Jan 31 06:42:51 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 31 06:42:51 crc restorecon[4683]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 06:42:51 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 06:42:52 crc restorecon[4683]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc 
restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc 
restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 
06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 
crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 06:42:52 crc restorecon[4683]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 06:42:52 crc restorecon[4683]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 06:42:52 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 
crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc 
restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc 
restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 06:42:53 crc restorecon[4683]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 06:42:53 crc restorecon[4683]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 31 06:42:55 crc kubenswrapper[4687]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 06:42:55 crc kubenswrapper[4687]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 31 06:42:55 crc kubenswrapper[4687]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 06:42:55 crc kubenswrapper[4687]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 31 06:42:55 crc kubenswrapper[4687]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 31 06:42:55 crc kubenswrapper[4687]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.132877 4687 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145176 4687 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145236 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145241 4687 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145246 4687 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145252 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145256 4687 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145260 4687 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145263 4687 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145267 4687 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145271 4687 feature_gate.go:330] 
unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145276 4687 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145284 4687 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145293 4687 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145299 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145304 4687 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145309 4687 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145313 4687 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145318 4687 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145322 4687 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145326 4687 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145330 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145334 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145338 4687 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 
06:42:55.145342 4687 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145346 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145350 4687 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145354 4687 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145357 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145361 4687 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145366 4687 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145370 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145374 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145377 4687 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145380 4687 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145384 4687 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145399 4687 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145426 4687 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145430 4687 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145435 4687 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145439 4687 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145443 4687 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145448 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145452 4687 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145456 4687 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145460 4687 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145464 4687 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145468 4687 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145471 4687 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145475 4687 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145480 4687 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145483 4687 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145487 4687 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145491 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145495 4687 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145499 4687 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145503 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145507 4687 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145512 4687 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145516 4687 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145519 4687 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145523 4687 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145526 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145530 4687 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145533 4687 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145538 4687 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145541 4687 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145547 4687 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145552 4687 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145556 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145560 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.145563 4687 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145676 4687 flags.go:64] FLAG: --address="0.0.0.0"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145690 4687 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145701 4687 flags.go:64] FLAG: --anonymous-auth="true"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145708 4687 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145717 4687 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145722 4687 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145730 4687 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145737 4687 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145742 4687 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145746 4687 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145752 4687 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145756 4687 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145761 4687 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145766 4687 flags.go:64] FLAG: --cgroup-root=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145770 4687 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145774 4687 flags.go:64] FLAG: --client-ca-file=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145779 4687 flags.go:64] FLAG: --cloud-config=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145783 4687 flags.go:64] FLAG: --cloud-provider=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145787 4687 flags.go:64] FLAG: --cluster-dns="[]"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145794 4687 flags.go:64] FLAG: --cluster-domain=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145798 4687 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145802 4687 flags.go:64] FLAG: --config-dir=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145807 4687 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145811 4687 flags.go:64] FLAG: --container-log-max-files="5"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145818 4687 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145823 4687 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145828 4687 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145834 4687 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145839 4687 flags.go:64] FLAG: --contention-profiling="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145843 4687 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145847 4687 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145852 4687 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145856 4687 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145863 4687 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145868 4687 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145873 4687 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145879 4687 flags.go:64] FLAG: --enable-load-reader="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145886 4687 flags.go:64] FLAG: --enable-server="true"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145891 4687 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145899 4687 flags.go:64] FLAG: --event-burst="100"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145904 4687 flags.go:64] FLAG: --event-qps="50"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145909 4687 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145914 4687 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145920 4687 flags.go:64] FLAG: --eviction-hard=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145930 4687 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145934 4687 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145938 4687 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145945 4687 flags.go:64] FLAG: --eviction-soft=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145950 4687 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145954 4687 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145958 4687 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145962 4687 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145967 4687 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145971 4687 flags.go:64] FLAG: --fail-swap-on="true"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145975 4687 flags.go:64] FLAG: --feature-gates=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145982 4687 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145988 4687 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.145994 4687 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146001 4687 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146008 4687 flags.go:64] FLAG: --healthz-port="10248"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146014 4687 flags.go:64] FLAG: --help="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146019 4687 flags.go:64] FLAG: --hostname-override=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146023 4687 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146029 4687 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146034 4687 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146038 4687 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146045 4687 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146049 4687 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146056 4687 flags.go:64] FLAG: --image-service-endpoint=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146061 4687 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146065 4687 flags.go:64] FLAG: --kube-api-burst="100"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146069 4687 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146074 4687 flags.go:64] FLAG: --kube-api-qps="50"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146078 4687 flags.go:64] FLAG: --kube-reserved=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146083 4687 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146087 4687 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146091 4687 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146095 4687 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146099 4687 flags.go:64] FLAG: --lock-file=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146103 4687 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146108 4687 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146112 4687 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146120 4687 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146126 4687 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146130 4687 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146135 4687 flags.go:64] FLAG: --logging-format="text"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146139 4687 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146144 4687 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146148 4687 flags.go:64] FLAG: --manifest-url=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146153 4687 flags.go:64] FLAG: --manifest-url-header=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146161 4687 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146165 4687 flags.go:64] FLAG: --max-open-files="1000000"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146171 4687 flags.go:64] FLAG: --max-pods="110"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146176 4687 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146180 4687 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146185 4687 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146189 4687 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146193 4687 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146198 4687 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146203 4687 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146217 4687 flags.go:64] FLAG: --node-status-max-images="50"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146222 4687 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146232 4687 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146241 4687 flags.go:64] FLAG: --pod-cidr=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146247 4687 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146257 4687 flags.go:64] FLAG: --pod-manifest-path=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146263 4687 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146268 4687 flags.go:64] FLAG: --pods-per-core="0"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146273 4687 flags.go:64] FLAG: --port="10250"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146278 4687 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146283 4687 flags.go:64] FLAG: --provider-id=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146287 4687 flags.go:64] FLAG: --qos-reserved=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146291 4687 flags.go:64] FLAG: --read-only-port="10255"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146296 4687 flags.go:64] FLAG: --register-node="true"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146300 4687 flags.go:64] FLAG: --register-schedulable="true"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146305 4687 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146314 4687 flags.go:64] FLAG: --registry-burst="10"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146319 4687 flags.go:64] FLAG: --registry-qps="5"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146323 4687 flags.go:64] FLAG: --reserved-cpus=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146328 4687 flags.go:64] FLAG: --reserved-memory=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146334 4687 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146339 4687 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146344 4687 flags.go:64] FLAG: --rotate-certificates="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146348 4687 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146352 4687 flags.go:64] FLAG: --runonce="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146357 4687 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146361 4687 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146366 4687 flags.go:64] FLAG: --seccomp-default="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146370 4687 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146374 4687 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146380 4687 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146384 4687 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146389 4687 flags.go:64] FLAG: --storage-driver-password="root"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146394 4687 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146398 4687 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146402 4687 flags.go:64] FLAG: --storage-driver-user="root"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146423 4687 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146428 4687 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146433 4687 flags.go:64] FLAG: --system-cgroups=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146437 4687 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146446 4687 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146451 4687 flags.go:64] FLAG: --tls-cert-file=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146455 4687 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146463 4687 flags.go:64] FLAG: --tls-min-version=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146467 4687 flags.go:64] FLAG: --tls-private-key-file=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146471 4687 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146475 4687 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146480 4687 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146484 4687 flags.go:64] FLAG: --v="2"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146492 4687 flags.go:64] FLAG: --version="false"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146499 4687 flags.go:64] FLAG: --vmodule=""
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146505 4687 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146510 4687 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146630 4687 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146636 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146641 4687 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146645 4687 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146649 4687 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146653 4687 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146657 4687 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146661 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146665 4687 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146669 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146675 4687 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146679 4687 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146684 4687 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146689 4687 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146693 4687 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146698 4687 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146703 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146707 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146710 4687 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146714 4687 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146719 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146723 4687 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146727 4687 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146731 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146736 4687 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146741 4687 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146745 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146750 4687 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146753 4687 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146758 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146762 4687 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146766 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146770 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146774 4687 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146780 4687 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146786 4687 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146790 4687 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146795 4687 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146801 4687 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146805 4687 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146809 4687 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146815 4687 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146819 4687 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146823 4687 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146834 4687 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146840 4687 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146846 4687 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146850 4687 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146855 4687 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146859 4687 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146863 4687 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146867 4687 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146871 4687 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146878 4687 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146882 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146886 4687 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146891 4687 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146896 4687 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146900 4687 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146904 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146908 4687 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146912 4687 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146916 4687 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146921 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146925 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146929 4687 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146934 4687 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146938 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146943 4687 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146947 4687 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.146951 4687 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.146959 4687 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.167982 4687 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.168045 4687 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168173 4687 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168187 4687 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168196 4687 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168204 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168214 4687 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168222 4687 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168230 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168238 4687 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168246 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168255 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168263 4687 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168271 4687 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168279 4687 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168287 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168294 4687 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168302 4687 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168310 4687 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168318 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168326 4687 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168334 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168342 4687 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168350 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168362 4687 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168373 4687 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168382 4687 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168391 4687 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168399 4687 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168435 4687 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168446 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168455 4687 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168464 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168472 4687 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168481 4687 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168492 4687 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168503 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168512 4687 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168520 4687 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168529 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168538 4687 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168546 4687 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168553 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168562 4687 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168569 4687 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168578 4687 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168585 4687 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168593 4687 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168604 4687 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168613 4687 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168622 4687 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168630 4687 feature_gate.go:330] unrecognized feature gate: Example Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168637 4687 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168645 4687 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168654 4687 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168665 4687 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168674 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168682 4687 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168689 4687 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168697 4687 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168705 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168713 4687 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168721 4687 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168728 4687 
feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168735 4687 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168743 4687 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168751 4687 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168759 4687 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168766 4687 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168774 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168781 4687 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168789 4687 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.168798 4687 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.168811 4687 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169033 4687 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 06:42:55 crc 
kubenswrapper[4687]: W0131 06:42:55.169046 4687 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169054 4687 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169063 4687 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169071 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169079 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169088 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169096 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169104 4687 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169113 4687 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169120 4687 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169128 4687 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169136 4687 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169144 4687 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169152 4687 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169160 4687 
feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169167 4687 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169175 4687 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169183 4687 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169191 4687 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169198 4687 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169206 4687 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169217 4687 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169226 4687 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169235 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169244 4687 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169252 4687 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169260 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169270 4687 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169281 4687 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169289 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169298 4687 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169306 4687 feature_gate.go:330] unrecognized feature gate: Example Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169315 4687 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169332 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169340 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169347 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169355 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169363 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169371 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169379 4687 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169387 4687 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169395 4687 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 
06:42:55.169443 4687 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169452 4687 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169460 4687 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169468 4687 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169479 4687 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169488 4687 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169497 4687 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169505 4687 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169514 4687 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169523 4687 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169531 4687 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169539 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169547 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169555 4687 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169562 4687 feature_gate.go:330] 
unrecognized feature gate: ChunkSizeMiB Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169570 4687 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169577 4687 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169585 4687 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169593 4687 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169602 4687 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169611 4687 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169620 4687 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169628 4687 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169636 4687 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169644 4687 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169651 4687 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169658 4687 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.169667 4687 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.169680 4687 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true 
DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.170147 4687 server.go:940] "Client rotation is on, will bootstrap in background" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.184853 4687 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.185007 4687 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.189295 4687 server.go:997] "Starting client certificate rotation" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.189346 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.191561 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-11 22:18:46.274612416 +0000 UTC Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.191747 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.287171 4687 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 06:42:55 crc kubenswrapper[4687]: E0131 06:42:55.289019 4687 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed 
certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.292073 4687 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.353158 4687 log.go:25] "Validated CRI v1 runtime API" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.466471 4687 log.go:25] "Validated CRI v1 image API" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.469021 4687 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.474940 4687 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-31-06-38-32-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.474989 4687 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.496321 4687 manager.go:217] Machine: {Timestamp:2026-01-31 06:42:55.494477679 +0000 UTC m=+1.771737294 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] 
NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:0982288b-de9c-4e82-b208-7781320b1d02 BootID:8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:03:0f:6e Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:03:0f:6e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:37:99:dd Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e6:f3:85 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c4:61:c2 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:38:f9:22 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:c2:8d:b0:f4:e0:fb Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:32:e4:0b:c0:0e:69 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] 
Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.496669 4687 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.496816 4687 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.497214 4687 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.497538 4687 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.497584 4687 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.497826 4687 topology_manager.go:138] "Creating topology manager with none policy" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.497839 4687 container_manager_linux.go:303] "Creating device plugin manager" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.498310 4687 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.498346 4687 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.498526 4687 state_mem.go:36] "Initialized new in-memory state store" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.498853 4687 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.503226 4687 kubelet.go:418] "Attempting to sync node with API server" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.503267 4687 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.503292 4687 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.503350 4687 kubelet.go:324] "Adding apiserver pod source" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.503373 4687 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.510638 4687 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.511196 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:42:55 crc kubenswrapper[4687]: E0131 06:42:55.511303 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError" Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.511453 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:42:55 crc kubenswrapper[4687]: E0131 06:42:55.511597 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.515963 4687 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.518745 4687 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.527037 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.527073 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.527085 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.527097 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.527117 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.527130 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.527145 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.527165 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.527188 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.527202 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.527307 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.527328 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.527369 4687 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.527943 4687 server.go:1280] "Started kubelet" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.528031 4687 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.528582 4687 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.529111 4687 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.529851 4687 server.go:460] "Adding debug handlers to kubelet server" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.529753 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:42:55 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.531557 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.531597 4687 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.532213 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 12:13:15.11802763 +0000 UTC Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.532846 4687 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.535540 4687 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 31 06:42:55 crc kubenswrapper[4687]: E0131 06:42:55.532835 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 06:42:55 crc kubenswrapper[4687]: E0131 06:42:55.535746 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" interval="200ms" Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.536667 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:42:55 crc kubenswrapper[4687]: E0131 06:42:55.536734 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.23:6443: connect: connection refused" 
logger="UnhandledError" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.536911 4687 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.547289 4687 factory.go:55] Registering systemd factory Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.547327 4687 factory.go:221] Registration of the systemd container factory successfully Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.547745 4687 factory.go:153] Registering CRI-O factory Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.547765 4687 factory.go:221] Registration of the crio container factory successfully Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.547858 4687 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.547893 4687 factory.go:103] Registering Raw factory Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.547911 4687 manager.go:1196] Started watching for new ooms in manager Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.549482 4687 manager.go:319] Starting recovery of all containers Jan 31 06:42:55 crc kubenswrapper[4687]: E0131 06:42:55.547578 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.23:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fbdb7c2fcdc83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 06:42:55.527910531 +0000 UTC 
m=+1.805170116,LastTimestamp:2026-01-31 06:42:55.527910531 +0000 UTC m=+1.805170116,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551449 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551546 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551562 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551577 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551590 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551603 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551616 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551628 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551644 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551657 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551670 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551684 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551698 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551715 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551731 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551744 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551758 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551772 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551785 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551800 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551865 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551882 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551896 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551908 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" 
seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551924 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551938 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551981 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.551999 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.552019 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.552033 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.552045 4687 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.552058 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.552072 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.552087 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.552101 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555628 4687 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555658 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555679 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555695 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555711 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555726 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555739 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555754 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555767 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555783 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555798 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555812 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555825 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" 
seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555840 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555852 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555867 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555881 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555897 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555915 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555932 4687 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555949 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555964 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555979 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.555994 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556008 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556021 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556034 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556048 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556061 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556075 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556121 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556154 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556170 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556183 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556197 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556210 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556225 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556238 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556251 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556265 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556291 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556308 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556320 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556334 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556347 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556362 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556375 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556388 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556401 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556470 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556487 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556503 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556518 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556532 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556547 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556561 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556573 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556586 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556599 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556613 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556627 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556640 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556654 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556666 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.556679 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557591 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557610 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557624 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 31 06:42:55 
crc kubenswrapper[4687]: I0131 06:42:55.557651 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557666 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557690 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557707 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557724 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557756 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557774 4687 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557789 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557807 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557823 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557839 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557855 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557869 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557886 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557904 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557921 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557935 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557947 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557961 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557974 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.557987 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558002 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558016 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558030 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558043 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558056 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558070 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558083 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558096 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558109 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558123 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558135 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558150 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558164 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558178 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558191 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558204 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558217 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558229 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558243 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558256 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558268 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558282 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558295 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558350 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558364 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558378 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558395 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558429 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558443 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558455 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558470 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558484 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558499 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558511 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558527 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558542 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558557 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558608 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558628 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558642 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558657 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558724 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558739 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558753 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558790 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558805 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558823 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558838 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558877 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558891 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558907 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558921 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558959 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558974 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.558989 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559003 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559038 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559053 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559067 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559080 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559093 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559139 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559154 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559169 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559208 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559224 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559239 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559253 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559296 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559311 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559327 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559343 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559438 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559459 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559477 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559532 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559548 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559562 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559576 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559612 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559632 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559646 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559660 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559695 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559709 4687 reconstruct.go:97] "Volume reconstruction finished" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.559719 4687 reconciler.go:26] "Reconciler: start to sync state" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.566949 4687 manager.go:324] Recovery completed Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.580781 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.583329 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.583366 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.583377 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.584282 4687 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.584292 4687 cpu_manager.go:226] 
"Reconciling" reconcilePeriod="10s" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.584362 4687 state_mem.go:36] "Initialized new in-memory state store" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.594604 4687 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.596623 4687 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.602152 4687 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.602198 4687 kubelet.go:2335] "Starting kubelet main sync loop" Jan 31 06:42:55 crc kubenswrapper[4687]: E0131 06:42:55.602256 4687 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 31 06:42:55 crc kubenswrapper[4687]: W0131 06:42:55.602909 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:42:55 crc kubenswrapper[4687]: E0131 06:42:55.602990 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError" Jan 31 06:42:55 crc kubenswrapper[4687]: E0131 06:42:55.635845 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 06:42:55 crc kubenswrapper[4687]: E0131 06:42:55.703134 4687 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have 
completed yet" Jan 31 06:42:55 crc kubenswrapper[4687]: E0131 06:42:55.736440 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 06:42:55 crc kubenswrapper[4687]: E0131 06:42:55.736985 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" interval="400ms" Jan 31 06:42:55 crc kubenswrapper[4687]: E0131 06:42:55.837483 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.845049 4687 policy_none.go:49] "None policy: Start" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.846630 4687 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.846671 4687 state_mem.go:35] "Initializing new in-memory state store" Jan 31 06:42:55 crc kubenswrapper[4687]: E0131 06:42:55.903431 4687 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.920797 4687 manager.go:334] "Starting Device Plugin manager" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.920874 4687 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.920894 4687 server.go:79] "Starting device plugin registration server" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.921429 4687 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.921458 4687 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 31 06:42:55 crc 
kubenswrapper[4687]: I0131 06:42:55.921661 4687 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.921779 4687 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 31 06:42:55 crc kubenswrapper[4687]: I0131 06:42:55.921791 4687 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 31 06:42:55 crc kubenswrapper[4687]: E0131 06:42:55.930507 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.022562 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.024112 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.024150 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.024160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.024186 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 06:42:56 crc kubenswrapper[4687]: E0131 06:42:56.024587 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.23:6443: connect: connection refused" node="crc" Jan 31 06:42:56 crc kubenswrapper[4687]: E0131 06:42:56.137831 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection 
refused" interval="800ms" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.225060 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.226116 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.226146 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.226154 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.226174 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 06:42:56 crc kubenswrapper[4687]: E0131 06:42:56.226592 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.23:6443: connect: connection refused" node="crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.304257 4687 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.304513 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.306172 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.306209 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 
06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.306219 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.306364 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.306677 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.306727 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.307156 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.307200 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.307218 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.307369 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.307496 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.307529 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.307714 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.307747 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.307763 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.308245 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.308273 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.308281 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.308323 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.308341 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.308351 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.308474 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:42:56 crc kubenswrapper[4687]: 
I0131 06:42:56.308579 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.308606 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.309301 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.309321 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.309330 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.309440 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.309488 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.309517 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.310023 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.310050 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.310062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.310034 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.310117 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.310131 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.310233 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.310248 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.310257 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.310279 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.310303 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.310929 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.310959 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.310970 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.368867 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.368982 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.369011 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.369039 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.369062 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.369086 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.369136 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.369251 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.369300 
4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.369337 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.369360 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.369383 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.369448 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.369488 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.369531 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.470864 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.470946 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.470977 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471006 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471028 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471047 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471067 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471087 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471109 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471130 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471154 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471166 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471215 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471174 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471264 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471278 4687 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471309 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471315 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471223 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471168 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471340 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471179 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471238 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471120 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471405 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471487 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471518 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471561 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471615 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.471755 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: W0131 06:42:56.476985 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:42:56 crc kubenswrapper[4687]: E0131 06:42:56.477267 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: 
Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError" Jan 31 06:42:56 crc kubenswrapper[4687]: W0131 06:42:56.477304 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:42:56 crc kubenswrapper[4687]: E0131 06:42:56.477397 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.531305 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.533448 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 09:47:52.169731655 +0000 UTC Jan 31 06:42:56 crc kubenswrapper[4687]: W0131 06:42:56.579826 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:42:56 crc kubenswrapper[4687]: E0131 06:42:56.579887 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.627593 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.629307 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.629361 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.629375 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.629406 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 06:42:56 crc kubenswrapper[4687]: E0131 06:42:56.630068 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.23:6443: connect: connection refused" node="crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.650704 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.660811 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.683258 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.715309 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: I0131 06:42:56.722553 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 06:42:56 crc kubenswrapper[4687]: W0131 06:42:56.802752 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-aeee81a35fa0c391d5ce7b1b0d58e9406c1cbd330fa2221ab46db5742546f7ce WatchSource:0}: Error finding container aeee81a35fa0c391d5ce7b1b0d58e9406c1cbd330fa2221ab46db5742546f7ce: Status 404 returned error can't find the container with id aeee81a35fa0c391d5ce7b1b0d58e9406c1cbd330fa2221ab46db5742546f7ce Jan 31 06:42:56 crc kubenswrapper[4687]: W0131 06:42:56.804886 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-39bcb03e4783eb2297704644c21eeff4829e6588a61405da684332f2db5517bc WatchSource:0}: Error finding container 39bcb03e4783eb2297704644c21eeff4829e6588a61405da684332f2db5517bc: Status 404 returned error can't find the container with id 39bcb03e4783eb2297704644c21eeff4829e6588a61405da684332f2db5517bc Jan 31 06:42:56 crc kubenswrapper[4687]: W0131 06:42:56.810694 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-2f1f1f6e0843dead14ba20015d91a3f3c347058afa45f14aee0f0752fd00a76e WatchSource:0}: Error finding container 2f1f1f6e0843dead14ba20015d91a3f3c347058afa45f14aee0f0752fd00a76e: Status 404 returned error can't find the container with id 2f1f1f6e0843dead14ba20015d91a3f3c347058afa45f14aee0f0752fd00a76e Jan 31 06:42:56 crc kubenswrapper[4687]: W0131 06:42:56.811457 4687 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-0d5fdab70c40947f281411d4310dbc9978a7e05cca8acd74699f67ac55b8d1c4 WatchSource:0}: Error finding container 0d5fdab70c40947f281411d4310dbc9978a7e05cca8acd74699f67ac55b8d1c4: Status 404 returned error can't find the container with id 0d5fdab70c40947f281411d4310dbc9978a7e05cca8acd74699f67ac55b8d1c4 Jan 31 06:42:56 crc kubenswrapper[4687]: W0131 06:42:56.814368 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6d9d13aaa75293238b57633d8ac9859980a99edf5cb3738c1925971f3f582348 WatchSource:0}: Error finding container 6d9d13aaa75293238b57633d8ac9859980a99edf5cb3738c1925971f3f582348: Status 404 returned error can't find the container with id 6d9d13aaa75293238b57633d8ac9859980a99edf5cb3738c1925971f3f582348 Jan 31 06:42:56 crc kubenswrapper[4687]: E0131 06:42:56.939461 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" interval="1.6s" Jan 31 06:42:57 crc kubenswrapper[4687]: W0131 06:42:57.025903 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:42:57 crc kubenswrapper[4687]: E0131 06:42:57.025988 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.23:6443: connect: connection refused" 
logger="UnhandledError" Jan 31 06:42:57 crc kubenswrapper[4687]: I0131 06:42:57.431137 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:42:57 crc kubenswrapper[4687]: I0131 06:42:57.432714 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:42:57 crc kubenswrapper[4687]: I0131 06:42:57.432778 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:42:57 crc kubenswrapper[4687]: I0131 06:42:57.432802 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:42:57 crc kubenswrapper[4687]: I0131 06:42:57.432846 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 06:42:57 crc kubenswrapper[4687]: E0131 06:42:57.433522 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.23:6443: connect: connection refused" node="crc" Jan 31 06:42:57 crc kubenswrapper[4687]: I0131 06:42:57.447102 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 06:42:57 crc kubenswrapper[4687]: E0131 06:42:57.448631 4687 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError" Jan 31 06:42:57 crc kubenswrapper[4687]: I0131 06:42:57.530714 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.23:6443: 
connect: connection refused Jan 31 06:42:57 crc kubenswrapper[4687]: I0131 06:42:57.533821 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:37:23.056728091 +0000 UTC Jan 31 06:42:57 crc kubenswrapper[4687]: I0131 06:42:57.610004 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"aeee81a35fa0c391d5ce7b1b0d58e9406c1cbd330fa2221ab46db5742546f7ce"} Jan 31 06:42:57 crc kubenswrapper[4687]: I0131 06:42:57.611554 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6d9d13aaa75293238b57633d8ac9859980a99edf5cb3738c1925971f3f582348"} Jan 31 06:42:57 crc kubenswrapper[4687]: I0131 06:42:57.613136 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0d5fdab70c40947f281411d4310dbc9978a7e05cca8acd74699f67ac55b8d1c4"} Jan 31 06:42:57 crc kubenswrapper[4687]: I0131 06:42:57.614677 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2f1f1f6e0843dead14ba20015d91a3f3c347058afa45f14aee0f0752fd00a76e"} Jan 31 06:42:57 crc kubenswrapper[4687]: I0131 06:42:57.616210 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"39bcb03e4783eb2297704644c21eeff4829e6588a61405da684332f2db5517bc"} Jan 31 06:42:58 crc kubenswrapper[4687]: I0131 06:42:58.530738 4687 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:42:58 crc kubenswrapper[4687]: I0131 06:42:58.534762 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:16:48.260605944 +0000 UTC Jan 31 06:42:58 crc kubenswrapper[4687]: E0131 06:42:58.540734 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" interval="3.2s" Jan 31 06:42:58 crc kubenswrapper[4687]: W0131 06:42:58.934582 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:42:58 crc kubenswrapper[4687]: E0131 06:42:58.935042 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError" Jan 31 06:42:59 crc kubenswrapper[4687]: I0131 06:42:59.034132 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:42:59 crc kubenswrapper[4687]: I0131 06:42:59.036551 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:42:59 crc kubenswrapper[4687]: I0131 06:42:59.036590 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 
06:42:59 crc kubenswrapper[4687]: I0131 06:42:59.036600 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:42:59 crc kubenswrapper[4687]: I0131 06:42:59.036626 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 06:42:59 crc kubenswrapper[4687]: E0131 06:42:59.037184 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.23:6443: connect: connection refused" node="crc" Jan 31 06:42:59 crc kubenswrapper[4687]: W0131 06:42:59.111900 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:42:59 crc kubenswrapper[4687]: E0131 06:42:59.112354 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError" Jan 31 06:42:59 crc kubenswrapper[4687]: W0131 06:42:59.231944 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:42:59 crc kubenswrapper[4687]: E0131 06:42:59.232343 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.23:6443: connect: connection refused" 
logger="UnhandledError" Jan 31 06:42:59 crc kubenswrapper[4687]: I0131 06:42:59.531016 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:42:59 crc kubenswrapper[4687]: I0131 06:42:59.535144 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 16:48:41.003215695 +0000 UTC Jan 31 06:42:59 crc kubenswrapper[4687]: W0131 06:42:59.564854 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:42:59 crc kubenswrapper[4687]: E0131 06:42:59.564952 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError" Jan 31 06:43:00 crc kubenswrapper[4687]: I0131 06:43:00.530477 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused Jan 31 06:43:00 crc kubenswrapper[4687]: I0131 06:43:00.535484 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:53:55.868287778 +0000 UTC Jan 31 06:43:00 crc kubenswrapper[4687]: I0131 06:43:00.627939 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97"}
Jan 31 06:43:00 crc kubenswrapper[4687]: I0131 06:43:00.629651 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183"}
Jan 31 06:43:00 crc kubenswrapper[4687]: I0131 06:43:00.631513 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564"}
Jan 31 06:43:00 crc kubenswrapper[4687]: I0131 06:43:00.632762 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b5e0c8f592467b911ddb1f973d61e7d2c044ae3d857e34ea7fa92e24f0c47ec3"}
Jan 31 06:43:00 crc kubenswrapper[4687]: I0131 06:43:00.633955 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79"}
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.531147 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.536198 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:29:57.538657966 +0000 UTC
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.638015 4687 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79" exitCode=0
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.638101 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79"}
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.638174 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.640763 4687 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b5e0c8f592467b911ddb1f973d61e7d2c044ae3d857e34ea7fa92e24f0c47ec3" exitCode=0
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.640809 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b5e0c8f592467b911ddb1f973d61e7d2c044ae3d857e34ea7fa92e24f0c47ec3"}
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.640907 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.640937 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.640781 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.641363 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.641374 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.641547 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.641791 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.641819 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.641831 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.641869 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.641893 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.641906 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.642249 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.642274 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.642283 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:01 crc kubenswrapper[4687]: E0131 06:43:01.741911 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" interval="6.4s"
Jan 31 06:43:01 crc kubenswrapper[4687]: I0131 06:43:01.753446 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 31 06:43:01 crc kubenswrapper[4687]: E0131 06:43:01.754236 4687 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError"
Jan 31 06:43:01 crc kubenswrapper[4687]: E0131 06:43:01.995610 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.23:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fbdb7c2fcdc83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 06:42:55.527910531 +0000 UTC m=+1.805170116,LastTimestamp:2026-01-31 06:42:55.527910531 +0000 UTC m=+1.805170116,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 31 06:43:02 crc kubenswrapper[4687]: I0131 06:43:02.237334 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:02 crc kubenswrapper[4687]: I0131 06:43:02.239117 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:02 crc kubenswrapper[4687]: I0131 06:43:02.239169 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:02 crc kubenswrapper[4687]: I0131 06:43:02.239183 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:02 crc kubenswrapper[4687]: I0131 06:43:02.239210 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 31 06:43:02 crc kubenswrapper[4687]: E0131 06:43:02.239870 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.23:6443: connect: connection refused" node="crc"
Jan 31 06:43:02 crc kubenswrapper[4687]: I0131 06:43:02.530896 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused
Jan 31 06:43:02 crc kubenswrapper[4687]: I0131 06:43:02.537372 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 03:45:42.920044478 +0000 UTC
Jan 31 06:43:02 crc kubenswrapper[4687]: W0131 06:43:02.558864 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused
Jan 31 06:43:02 crc kubenswrapper[4687]: E0131 06:43:02.558970 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError"
Jan 31 06:43:02 crc kubenswrapper[4687]: I0131 06:43:02.644247 4687 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183" exitCode=0
Jan 31 06:43:02 crc kubenswrapper[4687]: I0131 06:43:02.644298 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183"}
Jan 31 06:43:02 crc kubenswrapper[4687]: I0131 06:43:02.644467 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:02 crc kubenswrapper[4687]: I0131 06:43:02.645269 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:02 crc kubenswrapper[4687]: I0131 06:43:02.645312 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:02 crc kubenswrapper[4687]: I0131 06:43:02.645332 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:03 crc kubenswrapper[4687]: W0131 06:43:03.274216 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused
Jan 31 06:43:03 crc kubenswrapper[4687]: E0131 06:43:03.274356 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError"
Jan 31 06:43:03 crc kubenswrapper[4687]: I0131 06:43:03.531321 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused
Jan 31 06:43:03 crc kubenswrapper[4687]: I0131 06:43:03.537579 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:40:32.67756559 +0000 UTC
Jan 31 06:43:03 crc kubenswrapper[4687]: I0131 06:43:03.646504 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564" exitCode=0
Jan 31 06:43:03 crc kubenswrapper[4687]: I0131 06:43:03.646549 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564"}
Jan 31 06:43:03 crc kubenswrapper[4687]: I0131 06:43:03.646706 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:03 crc kubenswrapper[4687]: I0131 06:43:03.647699 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:03 crc kubenswrapper[4687]: I0131 06:43:03.647729 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:03 crc kubenswrapper[4687]: I0131 06:43:03.647745 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:03 crc kubenswrapper[4687]: I0131 06:43:03.647983 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9"}
Jan 31 06:43:03 crc kubenswrapper[4687]: I0131 06:43:03.649113 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:03 crc kubenswrapper[4687]: I0131 06:43:03.650031 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:03 crc kubenswrapper[4687]: I0131 06:43:03.650063 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:03 crc kubenswrapper[4687]: I0131 06:43:03.650075 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:04 crc kubenswrapper[4687]: I0131 06:43:04.531264 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused
Jan 31 06:43:04 crc kubenswrapper[4687]: I0131 06:43:04.538546 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 14:22:57.77587849 +0000 UTC
Jan 31 06:43:04 crc kubenswrapper[4687]: I0131 06:43:04.653143 4687 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ed7f156538831ab606fc24563177014cb2ebb140d38cf8809e3af8b17a64c548" exitCode=0
Jan 31 06:43:04 crc kubenswrapper[4687]: I0131 06:43:04.653226 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ed7f156538831ab606fc24563177014cb2ebb140d38cf8809e3af8b17a64c548"}
Jan 31 06:43:04 crc kubenswrapper[4687]: I0131 06:43:04.655846 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"27d1e2abd6e78dbd3c9966ee4a6b5ae79d62dc6abacfbafdf7ebd9073e41ac56"}
Jan 31 06:43:04 crc kubenswrapper[4687]: I0131 06:43:04.657832 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"177e7d2f12d88ec65f696b1500003d22464529c2a37000255906cb4e3598c69e"}
Jan 31 06:43:04 crc kubenswrapper[4687]: W0131 06:43:04.804571 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused
Jan 31 06:43:04 crc kubenswrapper[4687]: E0131 06:43:04.804673 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError"
Jan 31 06:43:05 crc kubenswrapper[4687]: W0131 06:43:05.242849 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused
Jan 31 06:43:05 crc kubenswrapper[4687]: E0131 06:43:05.242958 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError"
Jan 31 06:43:05 crc kubenswrapper[4687]: I0131 06:43:05.531278 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused
Jan 31 06:43:05 crc kubenswrapper[4687]: I0131 06:43:05.538745 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:04:31.650290383 +0000 UTC
Jan 31 06:43:05 crc kubenswrapper[4687]: I0131 06:43:05.661787 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029"}
Jan 31 06:43:05 crc kubenswrapper[4687]: I0131 06:43:05.661833 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:05 crc kubenswrapper[4687]: I0131 06:43:05.662502 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:05 crc kubenswrapper[4687]: I0131 06:43:05.662531 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:05 crc kubenswrapper[4687]: I0131 06:43:05.662543 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:05 crc kubenswrapper[4687]: E0131 06:43:05.931677 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 31 06:43:06 crc kubenswrapper[4687]: I0131 06:43:06.531393 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused
Jan 31 06:43:06 crc kubenswrapper[4687]: I0131 06:43:06.539729 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:30:04.832621457 +0000 UTC
Jan 31 06:43:06 crc kubenswrapper[4687]: I0131 06:43:06.668739 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"196dcd26b49160f379dc443180c93bb1148dd0d2977f516ff48a81d618e3caef"}
Jan 31 06:43:06 crc kubenswrapper[4687]: I0131 06:43:06.671305 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580"}
Jan 31 06:43:06 crc kubenswrapper[4687]: I0131 06:43:06.673806 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:06 crc kubenswrapper[4687]: I0131 06:43:06.674286 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e"}
Jan 31 06:43:06 crc kubenswrapper[4687]: I0131 06:43:06.674632 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:06 crc kubenswrapper[4687]: I0131 06:43:06.674656 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:06 crc kubenswrapper[4687]: I0131 06:43:06.674665 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:07 crc kubenswrapper[4687]: I0131 06:43:07.530987 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused
Jan 31 06:43:07 crc kubenswrapper[4687]: I0131 06:43:07.540097 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 23:41:45.044443375 +0000 UTC
Jan 31 06:43:07 crc kubenswrapper[4687]: I0131 06:43:07.680040 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d9f67c3167938a802b72b7df28c4cf024417f605419ea56865822746cceed27e"}
Jan 31 06:43:07 crc kubenswrapper[4687]: I0131 06:43:07.683467 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4"}
Jan 31 06:43:07 crc kubenswrapper[4687]: I0131 06:43:07.685987 4687 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="87d53b53feb14fb79c0e2a976021459a6662af87f8d700386477ebe8f9837f42" exitCode=0
Jan 31 06:43:07 crc kubenswrapper[4687]: I0131 06:43:07.686082 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"87d53b53feb14fb79c0e2a976021459a6662af87f8d700386477ebe8f9837f42"}
Jan 31 06:43:07 crc kubenswrapper[4687]: I0131 06:43:07.686390 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:07 crc kubenswrapper[4687]: I0131 06:43:07.688382 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:07 crc kubenswrapper[4687]: I0131 06:43:07.688497 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:07 crc kubenswrapper[4687]: I0131 06:43:07.688527 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:07 crc kubenswrapper[4687]: I0131 06:43:07.689943 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe"}
Jan 31 06:43:07 crc kubenswrapper[4687]: I0131 06:43:07.690025 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:07 crc kubenswrapper[4687]: I0131 06:43:07.690837 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:07 crc kubenswrapper[4687]: I0131 06:43:07.690876 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:07 crc kubenswrapper[4687]: I0131 06:43:07.690893 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:08 crc kubenswrapper[4687]: E0131 06:43:08.144818 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" interval="7s"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.531086 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.540860 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:01:21.360476826 +0000 UTC
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.603556 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.640178 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.641719 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.641771 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.641784 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.641814 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 31 06:43:08 crc kubenswrapper[4687]: E0131 06:43:08.642210 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.23:6443: connect: connection refused" node="crc"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.694399 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6c45b5bfe32874defddedac767bd3036e9eaf7d5ba834c72df4794f02f2c5b98"}
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.694471 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2534273656fd36d31225ae4887c84efb05bb47dada0112a741fc54e98b526084"}
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.697821 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"344a7604051ed2b5931403c4e5ed580aec5462a8d7d517cb6cd33902183ff048"}
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.697875 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.697891 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.697945 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.697875 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae"}
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.698859 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.698887 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.698896 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.699364 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.699392 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.699448 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.699463 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.699400 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:08 crc kubenswrapper[4687]: I0131 06:43:08.699521 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.039100 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.039536 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body=
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.039575 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused"
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.531712 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.541190 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 08:13:41.750894335 +0000 UTC
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.709360 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"25e9c2fbc1edcab9febc9b058bcc455ca1285f437802a1309fc03eda8568fe9d"}
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.709639 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.709720 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d1ff47b0d742f3d52f844b6fb1dd5e246f7c4bb9a73efbc92658996e0359c451"}
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.709800 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a6dd1f7483763e9d97523d97e0140717ed7235ef957ccaf55487009d05af1062"}
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.709507 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.709498 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.711044 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.711098 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.711112 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.711168 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.711199 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:09 crc kubenswrapper[4687]: I0131 06:43:09.711211 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:10 crc kubenswrapper[4687]: I0131 06:43:10.522045 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 31 06:43:10 crc kubenswrapper[4687]: E0131 06:43:10.523516 4687 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.23:6443: connect: connection refused" logger="UnhandledError"
Jan 31 06:43:10 crc kubenswrapper[4687]: I0131 06:43:10.531528 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.23:6443: connect: connection refused
Jan 31 06:43:10 crc kubenswrapper[4687]: I0131 06:43:10.542017 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:20:00.131757861 +0000 UTC
Jan 31 06:43:10 crc kubenswrapper[4687]: I0131 06:43:10.713531 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 31 06:43:10 crc kubenswrapper[4687]: I0131 06:43:10.715391 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="344a7604051ed2b5931403c4e5ed580aec5462a8d7d517cb6cd33902183ff048" exitCode=255
Jan 31 06:43:10 crc kubenswrapper[4687]: I0131 06:43:10.715499 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"344a7604051ed2b5931403c4e5ed580aec5462a8d7d517cb6cd33902183ff048"}
Jan 31 06:43:10 crc kubenswrapper[4687]: I0131 06:43:10.715613 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:10 crc kubenswrapper[4687]: I0131 06:43:10.715645 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:10 crc kubenswrapper[4687]: I0131 06:43:10.716567 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:10 crc kubenswrapper[4687]: I0131 06:43:10.716602 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:10 crc kubenswrapper[4687]: I0131 06:43:10.716612 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:10 crc kubenswrapper[4687]: I0131 06:43:10.716884 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:10 crc kubenswrapper[4687]: I0131 06:43:10.716953 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:10 crc kubenswrapper[4687]: I0131 06:43:10.716980 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:10 crc kubenswrapper[4687]: I0131 06:43:10.717902 4687 scope.go:117] "RemoveContainer" containerID="344a7604051ed2b5931403c4e5ed580aec5462a8d7d517cb6cd33902183ff048"
Jan 31 06:43:11 crc kubenswrapper[4687]: I0131 06:43:11.191834 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 06:43:11 crc kubenswrapper[4687]: I0131 06:43:11.543006 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 15:22:44.912047005 +0000 UTC
Jan 31 06:43:11 crc kubenswrapper[4687]: I0131 06:43:11.720232 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 31 06:43:11 crc kubenswrapper[4687]: I0131 06:43:11.721790 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:11 crc kubenswrapper[4687]: I0131 06:43:11.721812 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:11 crc kubenswrapper[4687]: I0131 06:43:11.721786 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45"}
Jan 31 06:43:11 crc kubenswrapper[4687]: I0131 06:43:11.722796 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:11 crc kubenswrapper[4687]: I0131 06:43:11.722837 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:11 crc kubenswrapper[4687]: I0131 06:43:11.722849 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:11 crc kubenswrapper[4687]: I0131 06:43:11.723385 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 06:43:11 crc kubenswrapper[4687]: I0131 06:43:11.723412 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 06:43:11 crc kubenswrapper[4687]: I0131 06:43:11.723441 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 06:43:11 crc kubenswrapper[4687]: I0131 06:43:11.807842 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 31 06:43:12 crc kubenswrapper[4687]: I0131 06:43:12.544687 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:32:02.918905113 +0000 UTC
Jan 31 06:43:12 crc kubenswrapper[4687]: I0131 06:43:12.723977 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:12 crc kubenswrapper[4687]: I0131 06:43:12.724686 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 06:43:12 crc kubenswrapper[4687]: I0131 06:43:12.724783 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 06:43:12 crc kubenswrapper[4687]: I0131 06:43:12.725394 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan
31 06:43:12 crc kubenswrapper[4687]: I0131 06:43:12.725448 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:12 crc kubenswrapper[4687]: I0131 06:43:12.725460 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:12 crc kubenswrapper[4687]: I0131 06:43:12.725932 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:12 crc kubenswrapper[4687]: I0131 06:43:12.725981 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:12 crc kubenswrapper[4687]: I0131 06:43:12.726002 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 06:43:13.085143 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 06:43:13.085349 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 06:43:13.086549 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 06:43:13.086619 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 06:43:13.086638 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 06:43:13.121501 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 
06:43:13.201931 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 06:43:13.544889 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:18:24.332139435 +0000 UTC Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 06:43:13.726485 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 06:43:13.726776 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 06:43:13.727601 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 06:43:13.727671 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 06:43:13.727688 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 06:43:13.727808 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 06:43:13.727859 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 06:43:13.727873 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:13 crc kubenswrapper[4687]: I0131 06:43:13.732288 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:43:14 crc 
kubenswrapper[4687]: I0131 06:43:14.179632 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:43:14 crc kubenswrapper[4687]: I0131 06:43:14.546503 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:45:40.595931618 +0000 UTC Jan 31 06:43:14 crc kubenswrapper[4687]: I0131 06:43:14.729165 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:43:14 crc kubenswrapper[4687]: I0131 06:43:14.730383 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:14 crc kubenswrapper[4687]: I0131 06:43:14.730424 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:14 crc kubenswrapper[4687]: I0131 06:43:14.730470 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:15 crc kubenswrapper[4687]: I0131 06:43:15.546923 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 03:12:50.687288515 +0000 UTC Jan 31 06:43:15 crc kubenswrapper[4687]: I0131 06:43:15.642325 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:43:15 crc kubenswrapper[4687]: I0131 06:43:15.643664 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:15 crc kubenswrapper[4687]: I0131 06:43:15.643712 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:15 crc kubenswrapper[4687]: I0131 06:43:15.643729 4687 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 06:43:15 crc kubenswrapper[4687]: I0131 06:43:15.643785 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 06:43:15 crc kubenswrapper[4687]: I0131 06:43:15.731662 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:43:15 crc kubenswrapper[4687]: I0131 06:43:15.732582 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:15 crc kubenswrapper[4687]: I0131 06:43:15.732742 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:15 crc kubenswrapper[4687]: I0131 06:43:15.732872 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:15 crc kubenswrapper[4687]: E0131 06:43:15.931813 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 06:43:16 crc kubenswrapper[4687]: I0131 06:43:16.202880 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 06:43:16 crc kubenswrapper[4687]: I0131 06:43:16.203027 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 06:43:16 crc kubenswrapper[4687]: I0131 06:43:16.547265 4687 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:28:51.507813752 +0000 UTC Jan 31 06:43:16 crc kubenswrapper[4687]: I0131 06:43:16.787341 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 06:43:16 crc kubenswrapper[4687]: I0131 06:43:16.787559 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:43:16 crc kubenswrapper[4687]: I0131 06:43:16.788812 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:16 crc kubenswrapper[4687]: I0131 06:43:16.788892 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:16 crc kubenswrapper[4687]: I0131 06:43:16.788916 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:17 crc kubenswrapper[4687]: I0131 06:43:17.547651 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 16:46:22.726424625 +0000 UTC Jan 31 06:43:18 crc kubenswrapper[4687]: I0131 06:43:18.237249 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 06:43:18 crc kubenswrapper[4687]: I0131 06:43:18.237326 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 06:43:18 crc kubenswrapper[4687]: I0131 06:43:18.548301 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 16:46:01.578396264 +0000 UTC Jan 31 06:43:19 crc kubenswrapper[4687]: I0131 06:43:19.045754 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]log ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]etcd ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/generic-apiserver-start-informers ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/priority-and-fairness-filter ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/start-apiextensions-informers ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/start-apiextensions-controllers ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/crd-informer-synced ok Jan 31 06:43:19 crc 
kubenswrapper[4687]: [+]poststarthook/start-system-namespaces-controller ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 31 06:43:19 crc kubenswrapper[4687]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 31 06:43:19 crc kubenswrapper[4687]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/bootstrap-controller ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/start-kube-aggregator-informers ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/apiservice-registration-controller ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/apiservice-discovery-controller ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]autoregister-completion ok Jan 31 06:43:19 crc kubenswrapper[4687]: [+]poststarthook/apiservice-openapi-controller ok Jan 31 06:43:19 crc kubenswrapper[4687]: 
[+]poststarthook/apiservice-openapiv3-controller ok Jan 31 06:43:19 crc kubenswrapper[4687]: livez check failed Jan 31 06:43:19 crc kubenswrapper[4687]: I0131 06:43:19.045825 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:43:19 crc kubenswrapper[4687]: I0131 06:43:19.548648 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:38:40.746470552 +0000 UTC Jan 31 06:43:19 crc kubenswrapper[4687]: I0131 06:43:19.954664 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 31 06:43:19 crc kubenswrapper[4687]: I0131 06:43:19.954820 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:43:19 crc kubenswrapper[4687]: I0131 06:43:19.955954 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:19 crc kubenswrapper[4687]: I0131 06:43:19.955988 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:19 crc kubenswrapper[4687]: I0131 06:43:19.955996 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:20 crc kubenswrapper[4687]: I0131 06:43:20.020676 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 31 06:43:20 crc kubenswrapper[4687]: I0131 06:43:20.549632 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 01:30:48.601698985 +0000 UTC Jan 31 06:43:20 crc kubenswrapper[4687]: I0131 06:43:20.743012 4687 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:43:20 crc kubenswrapper[4687]: I0131 06:43:20.744237 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:20 crc kubenswrapper[4687]: I0131 06:43:20.744312 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:20 crc kubenswrapper[4687]: I0131 06:43:20.744336 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:20 crc kubenswrapper[4687]: I0131 06:43:20.756800 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 31 06:43:21 crc kubenswrapper[4687]: I0131 06:43:21.550321 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 04:08:04.939256897 +0000 UTC Jan 31 06:43:21 crc kubenswrapper[4687]: I0131 06:43:21.745475 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:43:21 crc kubenswrapper[4687]: I0131 06:43:21.746353 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:21 crc kubenswrapper[4687]: I0131 06:43:21.746484 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:21 crc kubenswrapper[4687]: I0131 06:43:21.746512 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:22 crc kubenswrapper[4687]: I0131 06:43:22.551487 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:40:03.127038671 +0000 UTC Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 
06:43:23.227560 4687 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.229692 4687 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.230470 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.231234 4687 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.231294 4687 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.231583 4687 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.301088 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.305691 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.520089 4687 apiserver.go:52] "Watching apiserver" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.539369 4687 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.539735 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.540189 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.540242 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.540294 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.540541 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.540569 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.540603 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.540626 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.540864 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.540974 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.543463 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.543668 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.544255 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.544748 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.544891 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.545328 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.546097 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.546333 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.548824 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.551834 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:37:38.006345018 +0000 
UTC Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.572165 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.584187 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.595059 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.607808 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.617634 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.627544 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.635553 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.637687 4687 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.644657 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.657124 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.667198 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.677612 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e
95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.692837 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.733933 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" 
(UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734009 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734034 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734056 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734080 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734100 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734124 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734142 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734164 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734183 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734205 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734225 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod 
\"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734242 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734260 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734276 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734321 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734342 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734365 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734387 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734407 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734458 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734479 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734506 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734524 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734544 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734527 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734564 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734653 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734696 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734713 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734736 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734830 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734855 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734875 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734890 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734877 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734902 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734905 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734971 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.734992 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735027 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735049 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735075 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735098 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735120 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735141 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735166 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735185 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735200 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735215 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735229 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735256 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735283 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735304 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735328 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735352 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735383 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735429 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735374 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735452 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735476 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735497 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735513 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735534 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735535 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735551 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735569 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735586 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735605 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735621 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735638 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735655 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735672 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735688 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735706 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735722 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735741 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735758 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735774 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735793 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735824 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735853 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735878 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735900 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735923 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735939 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735945 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735960 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735972 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.735982 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736039 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736070 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736091 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 
06:43:23.736109 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736129 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736149 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736168 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736192 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736217 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736243 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736266 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736291 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736316 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736347 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 06:43:23 crc 
kubenswrapper[4687]: I0131 06:43:23.736348 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736373 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736395 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736450 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736476 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736502 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736529 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736555 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736580 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736604 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736635 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 
06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736665 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736690 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736756 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736782 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736818 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736846 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736869 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736894 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736919 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737014 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737042 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737067 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737090 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737113 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737137 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737159 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737182 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737207 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737230 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737254 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737277 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737303 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 
06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737329 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737353 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737379 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737403 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737448 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737475 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737497 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737524 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737549 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737571 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737587 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737602 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737619 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737636 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737653 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737676 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737703 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737727 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737786 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737814 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737838 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737861 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 
06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737886 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737911 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737935 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737958 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737981 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738003 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738027 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738053 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738076 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738101 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738124 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") 
" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738150 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738174 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738199 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738224 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738260 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738283 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738307 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738332 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738357 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738376 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738400 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738526 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738556 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738581 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738611 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738644 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738671 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738693 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738718 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738742 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738765 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738793 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738818 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738841 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738864 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738888 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738911 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738933 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738956 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738980 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739004 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739029 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739055 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739080 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739105 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739164 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739193 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739222 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739251 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739275 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739298 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739322 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739349 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 
06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739380 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739404 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739452 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739483 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739511 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739538 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739640 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739659 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739673 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739687 4687 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739700 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739713 4687 
reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739728 4687 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739741 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.739755 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736193 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736448 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736554 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736636 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.736910 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737250 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737528 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.737916 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738313 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.738536 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.740354 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.740359 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.740382 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.740385 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.740498 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.740549 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.740589 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.740639 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.740657 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.741234 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.741269 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.741640 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.741799 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.741976 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742070 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742137 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742169 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742277 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742398 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742392 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742400 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742458 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742474 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742527 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742534 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742497 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742614 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742728 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742863 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742874 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742907 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742975 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.743008 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.742900 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.743077 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.743590 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.743648 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.743755 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.744266 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.748484 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:43:24.248458806 +0000 UTC m=+30.525718441 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.752110 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.752179 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:24.252165485 +0000 UTC m=+30.529425140 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.753708 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.753755 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-31 06:43:24.253743123 +0000 UTC m=+30.531002788 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.754127 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.753791 4687 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.756080 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.756170 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.756293 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.757969 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.758141 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.758545 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.758731 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.758845 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.759153 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.759604 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.759977 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.760180 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.760352 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.760524 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.761842 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.762071 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.762084 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.763474 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.764124 4687 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.767877 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.768062 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.768385 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.771924 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.772223 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.777229 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.783584 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.783756 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.783834 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.783995 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.784143 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.784230 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.784237 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.784345 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.784837 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.791151 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.791179 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.791196 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.791286 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:24.291255323 +0000 UTC m=+30.568514898 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.791875 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.791898 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.791908 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:23 crc kubenswrapper[4687]: E0131 06:43:23.791940 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:24.291931839 +0000 UTC m=+30.569191414 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.792795 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.792817 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.793081 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.793760 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.794556 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.795044 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.796935 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.800778 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.803310 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.803743 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.803763 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.803848 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.803955 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.804050 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.804596 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.804712 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.805096 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.805476 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.805471 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.805664 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.805805 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.805861 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.805985 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.806591 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.806666 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.807228 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.809141 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.809469 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.809807 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.809941 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.810134 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.810640 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.811169 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.811346 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.811696 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.811829 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.811853 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.811961 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.812007 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.812474 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.812037 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.812198 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.812369 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.812651 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.812840 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.812970 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.813135 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.813594 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.814026 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.814095 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.814178 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.814225 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.814316 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.814650 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.814793 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.815229 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.816043 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.816860 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.817012 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.817091 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.816540 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.817321 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.817465 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.817819 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.819913 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.819988 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.823728 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.824793 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.825252 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.825521 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.825685 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.825780 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.825993 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.826459 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.826864 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.827837 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.828050 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.831951 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.833503 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.834551 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.834848 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.835184 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.835195 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.835696 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.836177 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.836754 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.836739 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.837089 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.837327 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.837498 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.837636 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.838174 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.838970 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.840752 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.840807 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.840858 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node 
\"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.840870 4687 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.840880 4687 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.840889 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.840898 4687 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.840907 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.840917 4687 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.840925 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.840933 4687 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.840942 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.840952 4687 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.840973 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.840987 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.840999 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841009 4687 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841019 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841030 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841042 4687 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841052 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841063 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841075 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841085 4687 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841094 4687 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841102 4687 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841110 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841119 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841127 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841135 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841147 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841155 4687 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841165 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841174 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841183 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841191 4687 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841199 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841207 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841221 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841237 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath 
\"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841247 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841255 4687 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841264 4687 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841273 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841282 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841290 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841298 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: 
I0131 06:43:23.841306 4687 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841317 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841324 4687 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841332 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841340 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841348 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841356 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841364 4687 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841372 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841380 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841388 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841395 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841409 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841434 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841444 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node 
\"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850126 4687 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850148 4687 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850159 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850168 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850186 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841510 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850195 4687 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850300 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850318 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850335 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850351 4687 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850364 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850377 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850395 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850423 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850438 4687 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850454 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850467 4687 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850479 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850492 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850505 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850518 
4687 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850531 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850542 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850555 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850567 4687 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850578 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850589 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850601 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" 
(UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850613 4687 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850624 4687 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850635 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850646 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850656 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850690 4687 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850702 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") 
on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850713 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850724 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850735 4687 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850767 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850779 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850790 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850802 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 
06:43:23.850814 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850879 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850891 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850923 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850938 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850950 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850963 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.846242 4687 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850974 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.846578 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.846757 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851032 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851057 4687 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851091 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851106 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851119 4687 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851131 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851144 4687 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 
06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851155 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851171 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851183 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851194 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851206 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851217 4687 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851229 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851241 4687 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851253 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851265 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851278 4687 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851290 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851305 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.841938 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851317 4687 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851332 4687 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851343 4687 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851355 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851368 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851380 4687 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.850945 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: 
"925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851291 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851287 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851391 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851518 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851531 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851543 4687 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851555 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851567 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851579 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 
06:43:23.851592 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851606 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851618 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851628 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851641 4687 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851652 4687 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851662 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851672 4687 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851683 4687 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851694 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851705 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851714 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851724 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851734 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851744 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851754 4687 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851765 4687 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851776 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851786 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851795 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851807 4687 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851816 4687 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" 
DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851825 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851838 4687 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851852 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851866 4687 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.851847 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.853268 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.854251 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.862556 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.863865 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.863880 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.863991 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.869200 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.869303 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.877103 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.884259 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.886141 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.896930 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.953100 4687 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.953137 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.953146 4687 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.953155 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.953163 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.953175 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.953184 4687 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.953192 4687 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.953200 4687 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.953209 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.953217 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.953224 4687 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.953233 4687 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" 
DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.953241 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.953249 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:23 crc kubenswrapper[4687]: I0131 06:43:23.953257 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.048134 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.050250 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.050346 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.054553 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:43:24 crc 
kubenswrapper[4687]: I0131 06:43:24.059787 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.065518 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.079438 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.090922 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.106301 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.119126 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.129310 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.141844 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.155685 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.164547 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.174071 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.182435 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.190908 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.200121 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://344a7604051ed2b5931403c4e5ed580aec5462a8d7d517cb6cd33902183ff048\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:10Z\\\",\\\"message\\\":\\\"W0131 06:43:08.664611 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 06:43:08.665131 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769841788 cert, and key in /tmp/serving-cert-191569390/serving-signer.crt, /tmp/serving-cert-191569390/serving-signer.key\\\\nI0131 06:43:09.616780 1 observer_polling.go:159] Starting file observer\\\\nW0131 06:43:09.644269 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 06:43:09.644488 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:09.651606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-191569390/tls.crt::/tmp/serving-cert-191569390/tls.key\\\\\\\"\\\\nF0131 06:43:10.068241 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.213027 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.222260 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.256358 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.256440 4687 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.256480 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:24 crc kubenswrapper[4687]: E0131 06:43:24.256554 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 06:43:24 crc kubenswrapper[4687]: E0131 06:43:24.256608 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:25.256594045 +0000 UTC m=+31.533853620 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 06:43:24 crc kubenswrapper[4687]: E0131 06:43:24.256828 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 06:43:24 crc kubenswrapper[4687]: E0131 06:43:24.256862 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:43:25.256843481 +0000 UTC m=+31.534103056 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:43:24 crc kubenswrapper[4687]: E0131 06:43:24.256891 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:25.256879662 +0000 UTC m=+31.534139237 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.356767 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.356812 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:24 crc kubenswrapper[4687]: E0131 06:43:24.356937 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 06:43:24 crc kubenswrapper[4687]: E0131 06:43:24.356952 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 06:43:24 crc kubenswrapper[4687]: E0131 06:43:24.356972 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:24 crc kubenswrapper[4687]: E0131 06:43:24.357025 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:25.357008416 +0000 UTC m=+31.634267991 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:24 crc kubenswrapper[4687]: E0131 06:43:24.357314 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 06:43:24 crc kubenswrapper[4687]: E0131 06:43:24.357325 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 06:43:24 crc kubenswrapper[4687]: E0131 06:43:24.357332 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:24 crc kubenswrapper[4687]: E0131 06:43:24.357354 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" 
failed. No retries permitted until 2026-01-31 06:43:25.357347924 +0000 UTC m=+31.634607499 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.552285 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:42:10.317369367 +0000 UTC Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.606158 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.606250 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.692600 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.692684 4687 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.761137 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ac830808937cac445358f6b861979d2ee706c78d3d74295e9f92b195da3bc1fe"} Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.763238 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff"} Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.763271 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f649ad620f11d063534284997d732c7129de1625fba55aea774d19f0598669c5"} Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.765245 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.765808 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.768185 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45" 
exitCode=255 Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.768252 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45"} Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.768310 4687 scope.go:117] "RemoveContainer" containerID="344a7604051ed2b5931403c4e5ed580aec5462a8d7d517cb6cd33902183ff048" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.768732 4687 scope.go:117] "RemoveContainer" containerID="a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45" Jan 31 06:43:24 crc kubenswrapper[4687]: E0131 06:43:24.768901 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.771568 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95"} Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.771604 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0e236e24a871f819738ddc5dea400cf121af22e7b85c34d39cf5ff120202faaa"} Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.781393 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.791901 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.802684 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.817550 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.834183 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.843817 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.857880 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://344a7604051ed2b5931403c4e5ed580aec5462a8d7d517cb6cd33902183ff048\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:10Z\\\",\\\"message\\\":\\\"W0131 06:43:08.664611 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 06:43:08.665131 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769841788 cert, and key in /tmp/serving-cert-191569390/serving-signer.crt, /tmp/serving-cert-191569390/serving-signer.key\\\\nI0131 06:43:09.616780 1 observer_polling.go:159] Starting file observer\\\\nW0131 06:43:09.644269 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 06:43:09.644488 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:09.651606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-191569390/tls.crt::/tmp/serving-cert-191569390/tls.key\\\\\\\"\\\\nF0131 06:43:10.068241 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.867167 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.878094 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.887885 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.898785 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://344a7604051ed2b5931403c4e5ed580aec5462a8d7d517cb6cd33902183ff048\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:10Z\\\",\\\"message\\\":\\\"W0131 06:43:08.664611 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 06:43:08.665131 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769841788 cert, and key in /tmp/serving-cert-191569390/serving-signer.crt, 
/tmp/serving-cert-191569390/serving-signer.key\\\\nI0131 06:43:09.616780 1 observer_polling.go:159] Starting file observer\\\\nW0131 06:43:09.644269 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 06:43:09.644488 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:09.651606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-191569390/tls.crt::/tmp/serving-cert-191569390/tls.key\\\\\\\"\\\\nF0131 06:43:10.068241 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.907298 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.917925 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.930497 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.940590 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:24 crc kubenswrapper[4687]: I0131 06:43:24.951657 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.263269 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.263364 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.263394 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.263471 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:43:27.263452137 +0000 UTC m=+33.540711712 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.263550 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.263563 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.263631 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-31 06:43:27.263612601 +0000 UTC m=+33.540872176 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.263648 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:27.263641392 +0000 UTC m=+33.540900967 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.364071 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.364149 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.364268 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.364294 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.364304 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.364362 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:27.36434781 +0000 UTC m=+33.641607385 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.364362 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.364442 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.364469 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.364598 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:27.364539294 +0000 UTC m=+33.641798919 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.553084 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 03:59:42.552859334 +0000 UTC Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.602711 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.602769 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.602833 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.602916 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.602992 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.603054 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.609361 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.610188 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.611473 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.612148 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.616505 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" 
path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.617197 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.618037 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.619199 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.619909 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.620999 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.621559 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.622821 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.623372 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.623692 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.624092 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.625108 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.625695 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.626757 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.627182 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.627845 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.628958 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.629560 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.630778 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.631261 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.632461 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.632946 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.633675 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.635002 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.635578 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.636687 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.637218 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.638346 4687 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.638491 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.640352 4687 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.641721 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.641975 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\"
:\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.642185 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.643895 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.644646 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.645728 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.646477 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.647646 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.648202 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.649345 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.650128 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.651343 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.651890 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.652953 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.653541 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.654847 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.655367 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.656524 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.657024 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.658076 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.658784 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.658860 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.659331 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.673458 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.691607 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://344a7604051ed2b5931403c4e5ed580aec5462a8d7d517cb6cd33902183ff048\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:10Z\\\",\\\"message\\\":\\\"W0131 06:43:08.664611 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 06:43:08.665131 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769841788 cert, and key in /tmp/serving-cert-191569390/serving-signer.crt, 
/tmp/serving-cert-191569390/serving-signer.key\\\\nI0131 06:43:09.616780 1 observer_polling.go:159] Starting file observer\\\\nW0131 06:43:09.644269 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 06:43:09.644488 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:09.651606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-191569390/tls.crt::/tmp/serving-cert-191569390/tls.key\\\\\\\"\\\\nF0131 06:43:10.068241 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.707432 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.722471 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.737272 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.775125 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.777278 4687 scope.go:117] "RemoveContainer" 
containerID="a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45" Jan 31 06:43:25 crc kubenswrapper[4687]: E0131 06:43:25.777462 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.778138 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5"} Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.789868 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.805997 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.825623 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.842893 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.859838 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.875310 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.890764 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.903812 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.917357 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.938847 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992
787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.954706 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.969784 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.984703 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:25 crc kubenswrapper[4687]: I0131 06:43:25.997594 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:26 crc kubenswrapper[4687]: I0131 06:43:26.013356 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:26Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:26 crc kubenswrapper[4687]: I0131 06:43:26.024688 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:26Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:26 crc kubenswrapper[4687]: I0131 06:43:26.553601 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:23:41.496112771 +0000 UTC Jan 31 06:43:26 crc kubenswrapper[4687]: I0131 06:43:26.756718 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 06:43:26 crc kubenswrapper[4687]: I0131 06:43:26.774084 4687 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 06:43:26 crc kubenswrapper[4687]: I0131 06:43:26.781873 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d"} Jan 31 06:43:26 crc kubenswrapper[4687]: I0131 06:43:26.797465 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:26Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:26 crc kubenswrapper[4687]: I0131 06:43:26.814063 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:26Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:26 crc kubenswrapper[4687]: I0131 06:43:26.832701 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:26Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:26 crc kubenswrapper[4687]: I0131 06:43:26.848597 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992
787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:26Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:26 crc kubenswrapper[4687]: I0131 06:43:26.863698 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:26Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:26 crc kubenswrapper[4687]: I0131 06:43:26.877170 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:26Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:26 crc kubenswrapper[4687]: I0131 06:43:26.891173 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:26Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:26 crc kubenswrapper[4687]: I0131 06:43:26.904903 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:26Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:27 crc kubenswrapper[4687]: I0131 06:43:27.281382 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:43:27 crc kubenswrapper[4687]: I0131 06:43:27.281474 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:27 crc kubenswrapper[4687]: I0131 06:43:27.281506 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:27 crc kubenswrapper[4687]: E0131 06:43:27.281575 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 06:43:31.281554067 +0000 UTC m=+37.558813642 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:43:27 crc kubenswrapper[4687]: E0131 06:43:27.281612 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 06:43:27 crc kubenswrapper[4687]: E0131 06:43:27.281618 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 06:43:27 crc kubenswrapper[4687]: E0131 06:43:27.281693 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:31.28165411 +0000 UTC m=+37.558913685 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 06:43:27 crc kubenswrapper[4687]: E0131 06:43:27.281710 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:31.281703501 +0000 UTC m=+37.558963076 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 06:43:27 crc kubenswrapper[4687]: I0131 06:43:27.382338 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:27 crc kubenswrapper[4687]: I0131 06:43:27.382457 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:27 crc kubenswrapper[4687]: E0131 06:43:27.382628 4687 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 06:43:27 crc kubenswrapper[4687]: E0131 06:43:27.382662 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 06:43:27 crc kubenswrapper[4687]: E0131 06:43:27.382671 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 06:43:27 crc kubenswrapper[4687]: E0131 06:43:27.382689 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 06:43:27 crc kubenswrapper[4687]: E0131 06:43:27.382694 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:27 crc kubenswrapper[4687]: E0131 06:43:27.382710 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:27 crc kubenswrapper[4687]: E0131 06:43:27.382777 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-31 06:43:31.382752757 +0000 UTC m=+37.660012522 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:27 crc kubenswrapper[4687]: E0131 06:43:27.382808 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:31.382797178 +0000 UTC m=+37.660056983 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:27 crc kubenswrapper[4687]: I0131 06:43:27.519353 4687 csr.go:261] certificate signing request csr-twppz is approved, waiting to be issued Jan 31 06:43:27 crc kubenswrapper[4687]: I0131 06:43:27.548812 4687 csr.go:257] certificate signing request csr-twppz is issued Jan 31 06:43:27 crc kubenswrapper[4687]: I0131 06:43:27.553891 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 14:40:41.530583407 +0000 UTC Jan 31 06:43:27 crc kubenswrapper[4687]: I0131 06:43:27.602709 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:27 crc kubenswrapper[4687]: I0131 06:43:27.602758 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:27 crc kubenswrapper[4687]: E0131 06:43:27.602845 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:27 crc kubenswrapper[4687]: I0131 06:43:27.602713 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:27 crc kubenswrapper[4687]: E0131 06:43:27.602969 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:27 crc kubenswrapper[4687]: E0131 06:43:27.603050 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.360649 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-sv5n6"] Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.361220 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-sv5n6" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.363024 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hkgkr"] Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.363233 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.363431 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.363564 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-77mzd"] Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.363835 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-jlk4z"] Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.363928 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.364360 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.364555 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.365508 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.366120 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.366338 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.366793 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.366963 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.367188 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.367267 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.367399 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.367497 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.367758 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.367836 4687 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.367931 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.368054 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.377999 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.390710 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-os-release\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.390945 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c340f403-35a5-4c6d-80b0-2e0fe7399192-proxy-tls\") pod \"machine-config-daemon-hkgkr\" (UID: \"c340f403-35a5-4c6d-80b0-2e0fe7399192\") " pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.391076 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d57913d8-5742-4fd2-925b-6721231e7863-system-cni-dir\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " 
pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.391160 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-hostroot\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.391233 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d57913d8-5742-4fd2-925b-6721231e7863-cnibin\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.391320 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4tdc\" (UniqueName: \"kubernetes.io/projected/c340f403-35a5-4c6d-80b0-2e0fe7399192-kube-api-access-k4tdc\") pod \"machine-config-daemon-hkgkr\" (UID: \"c340f403-35a5-4c6d-80b0-2e0fe7399192\") " pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.391400 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/96c21054-65ed-4db4-969f-bbb10f612772-cni-binary-copy\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.391501 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-var-lib-cni-bin\") pod \"multus-77mzd\" (UID: 
\"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.391603 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-run-netns\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.391679 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxb4b\" (UniqueName: \"kubernetes.io/projected/ad4abe4f-d012-452a-81cf-6e96ec9a8dea-kube-api-access-jxb4b\") pod \"node-resolver-sv5n6\" (UID: \"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\") " pod="openshift-dns/node-resolver-sv5n6" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.391757 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ad4abe4f-d012-452a-81cf-6e96ec9a8dea-hosts-file\") pod \"node-resolver-sv5n6\" (UID: \"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\") " pod="openshift-dns/node-resolver-sv5n6" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.391832 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-cnibin\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.391903 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d57913d8-5742-4fd2-925b-6721231e7863-cni-binary-copy\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " 
pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.391975 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d57913d8-5742-4fd2-925b-6721231e7863-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.392051 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d57913d8-5742-4fd2-925b-6721231e7863-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.392121 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-var-lib-cni-multus\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.392197 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-var-lib-kubelet\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.392275 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/c340f403-35a5-4c6d-80b0-2e0fe7399192-mcd-auth-proxy-config\") pod \"machine-config-daemon-hkgkr\" (UID: \"c340f403-35a5-4c6d-80b0-2e0fe7399192\") " pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.392350 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjkwv\" (UniqueName: \"kubernetes.io/projected/96c21054-65ed-4db4-969f-bbb10f612772-kube-api-access-pjkwv\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.392446 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d57913d8-5742-4fd2-925b-6721231e7863-os-release\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.392513 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-run-multus-certs\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.392594 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-etc-kubernetes\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.392675 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-multus-socket-dir-parent\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.392741 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/96c21054-65ed-4db4-969f-bbb10f612772-multus-daemon-config\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.392817 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-multus-conf-dir\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.392922 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c340f403-35a5-4c6d-80b0-2e0fe7399192-rootfs\") pod \"machine-config-daemon-hkgkr\" (UID: \"c340f403-35a5-4c6d-80b0-2e0fe7399192\") " pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.393001 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drwlj\" (UniqueName: \"kubernetes.io/projected/d57913d8-5742-4fd2-925b-6721231e7863-kube-api-access-drwlj\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.393066 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-system-cni-dir\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.393141 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-multus-cni-dir\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.393214 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-run-k8s-cni-cncf-io\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.394793 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.404747 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.418832 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.431372 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.443623 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.461494 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992
787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.473279 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.487140 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.494583 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c340f403-35a5-4c6d-80b0-2e0fe7399192-rootfs\") pod \"machine-config-daemon-hkgkr\" (UID: \"c340f403-35a5-4c6d-80b0-2e0fe7399192\") " pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:43:28 crc 
kubenswrapper[4687]: I0131 06:43:28.494619 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drwlj\" (UniqueName: \"kubernetes.io/projected/d57913d8-5742-4fd2-925b-6721231e7863-kube-api-access-drwlj\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.494645 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-system-cni-dir\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.494660 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-multus-cni-dir\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.494685 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-run-k8s-cni-cncf-io\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.494703 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-os-release\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.494719 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c340f403-35a5-4c6d-80b0-2e0fe7399192-proxy-tls\") pod \"machine-config-daemon-hkgkr\" (UID: \"c340f403-35a5-4c6d-80b0-2e0fe7399192\") " pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.494734 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d57913d8-5742-4fd2-925b-6721231e7863-system-cni-dir\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.494749 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-hostroot\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.494764 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d57913d8-5742-4fd2-925b-6721231e7863-cnibin\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.494778 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-var-lib-cni-bin\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.494794 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4tdc\" (UniqueName: 
\"kubernetes.io/projected/c340f403-35a5-4c6d-80b0-2e0fe7399192-kube-api-access-k4tdc\") pod \"machine-config-daemon-hkgkr\" (UID: \"c340f403-35a5-4c6d-80b0-2e0fe7399192\") " pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.494793 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c340f403-35a5-4c6d-80b0-2e0fe7399192-rootfs\") pod \"machine-config-daemon-hkgkr\" (UID: \"c340f403-35a5-4c6d-80b0-2e0fe7399192\") " pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.494808 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/96c21054-65ed-4db4-969f-bbb10f612772-cni-binary-copy\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.494925 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-run-netns\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.494959 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxb4b\" (UniqueName: \"kubernetes.io/projected/ad4abe4f-d012-452a-81cf-6e96ec9a8dea-kube-api-access-jxb4b\") pod \"node-resolver-sv5n6\" (UID: \"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\") " pod="openshift-dns/node-resolver-sv5n6" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.494998 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ad4abe4f-d012-452a-81cf-6e96ec9a8dea-hosts-file\") 
pod \"node-resolver-sv5n6\" (UID: \"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\") " pod="openshift-dns/node-resolver-sv5n6" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495021 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-cnibin\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495042 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d57913d8-5742-4fd2-925b-6721231e7863-cni-binary-copy\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495065 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d57913d8-5742-4fd2-925b-6721231e7863-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495088 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d57913d8-5742-4fd2-925b-6721231e7863-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495111 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-var-lib-cni-multus\") pod 
\"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495132 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-var-lib-kubelet\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495155 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c340f403-35a5-4c6d-80b0-2e0fe7399192-mcd-auth-proxy-config\") pod \"machine-config-daemon-hkgkr\" (UID: \"c340f403-35a5-4c6d-80b0-2e0fe7399192\") " pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495179 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjkwv\" (UniqueName: \"kubernetes.io/projected/96c21054-65ed-4db4-969f-bbb10f612772-kube-api-access-pjkwv\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495200 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d57913d8-5742-4fd2-925b-6721231e7863-os-release\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495221 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-run-multus-certs\") pod \"multus-77mzd\" (UID: 
\"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495242 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-etc-kubernetes\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495277 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-multus-socket-dir-parent\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495281 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-system-cni-dir\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495294 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/96c21054-65ed-4db4-969f-bbb10f612772-multus-daemon-config\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495338 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-multus-conf-dir\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495363 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d57913d8-5742-4fd2-925b-6721231e7863-system-cni-dir\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495343 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/96c21054-65ed-4db4-969f-bbb10f612772-cni-binary-copy\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495391 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-hostroot\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495440 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d57913d8-5742-4fd2-925b-6721231e7863-cnibin\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495453 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-run-k8s-cni-cncf-io\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495479 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-var-lib-cni-bin\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495567 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-multus-cni-dir\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495565 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-var-lib-cni-multus\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495621 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-cnibin\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495650 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-run-netns\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495756 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-os-release\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc 
kubenswrapper[4687]: I0131 06:43:28.495794 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ad4abe4f-d012-452a-81cf-6e96ec9a8dea-hosts-file\") pod \"node-resolver-sv5n6\" (UID: \"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\") " pod="openshift-dns/node-resolver-sv5n6" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495805 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d57913d8-5742-4fd2-925b-6721231e7863-os-release\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495832 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-var-lib-kubelet\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495935 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/96c21054-65ed-4db4-969f-bbb10f612772-multus-daemon-config\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.495975 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-host-run-multus-certs\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.496003 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-etc-kubernetes\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.496045 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-multus-socket-dir-parent\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.496447 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d57913d8-5742-4fd2-925b-6721231e7863-cni-binary-copy\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.496493 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/96c21054-65ed-4db4-969f-bbb10f612772-multus-conf-dir\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.496558 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c340f403-35a5-4c6d-80b0-2e0fe7399192-mcd-auth-proxy-config\") pod \"machine-config-daemon-hkgkr\" (UID: \"c340f403-35a5-4c6d-80b0-2e0fe7399192\") " pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.496783 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/d57913d8-5742-4fd2-925b-6721231e7863-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.502249 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.503938 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d57913d8-5742-4fd2-925b-6721231e7863-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.506701 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c340f403-35a5-4c6d-80b0-2e0fe7399192-proxy-tls\") pod \"machine-config-daemon-hkgkr\" (UID: \"c340f403-35a5-4c6d-80b0-2e0fe7399192\") " pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.515093 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drwlj\" (UniqueName: \"kubernetes.io/projected/d57913d8-5742-4fd2-925b-6721231e7863-kube-api-access-drwlj\") pod \"multus-additional-cni-plugins-jlk4z\" (UID: \"d57913d8-5742-4fd2-925b-6721231e7863\") " pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 
06:43:28.516102 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxb4b\" (UniqueName: \"kubernetes.io/projected/ad4abe4f-d012-452a-81cf-6e96ec9a8dea-kube-api-access-jxb4b\") pod \"node-resolver-sv5n6\" (UID: \"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\") " pod="openshift-dns/node-resolver-sv5n6" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.516174 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjkwv\" (UniqueName: \"kubernetes.io/projected/96c21054-65ed-4db4-969f-bbb10f612772-kube-api-access-pjkwv\") pod \"multus-77mzd\" (UID: \"96c21054-65ed-4db4-969f-bbb10f612772\") " pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.517053 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4tdc\" (UniqueName: \"kubernetes.io/projected/c340f403-35a5-4c6d-80b0-2e0fe7399192-kube-api-access-k4tdc\") pod \"machine-config-daemon-hkgkr\" (UID: \"c340f403-35a5-4c6d-80b0-2e0fe7399192\") " pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.524782 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\
"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/
etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.541004 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.550062 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-31 06:38:27 +0000 UTC, rotation deadline is 2026-10-22 02:44:13.419155227 +0000 UTC Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.550100 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6332h0m44.869057794s for next certificate rotation Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.554262 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 07:21:46.07060834 +0000 UTC Jan 31 06:43:28 crc 
kubenswrapper[4687]: I0131 06:43:28.555502 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.571637 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.586297 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.599930 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.612656 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.625336 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.637585 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.649331 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.661971 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.677264 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-sv5n6" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.683298 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.690876 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-77mzd" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.696326 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" Jan 31 06:43:28 crc kubenswrapper[4687]: W0131 06:43:28.718126 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd57913d8_5742_4fd2_925b_6721231e7863.slice/crio-41ceb87ee40b40c465ba3368061e51f9969eaa172675cc87d05d86712f1355fb WatchSource:0}: Error finding container 41ceb87ee40b40c465ba3368061e51f9969eaa172675cc87d05d86712f1355fb: Status 404 returned error can't find the container with id 41ceb87ee40b40c465ba3368061e51f9969eaa172675cc87d05d86712f1355fb Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.748260 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zvpgn"] Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.749168 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.751785 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.751817 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.757851 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.759115 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.759353 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.759431 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.759515 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.785944 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.792631 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-sv5n6" event={"ID":"ad4abe4f-d012-452a-81cf-6e96ec9a8dea","Type":"ContainerStarted","Data":"bb163f05538c94f735a915fba329a2d9ef28f02416463efe9e7c6fb756899113"} Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.796943 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovnkube-config\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.796987 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-systemd\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797016 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-systemd-units\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797042 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797077 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-node-log\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797103 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-run-netns\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797125 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-cni-bin\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797160 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-slash\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797183 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-ovn\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797205 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-env-overrides\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797227 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-var-lib-openvswitch\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797253 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovnkube-script-lib\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797275 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-log-socket\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797300 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-cni-netd\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797401 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-openvswitch\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797469 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9ts2\" (UniqueName: \"kubernetes.io/projected/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-kube-api-access-w9ts2\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797520 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-kubelet\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797554 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovn-node-metrics-cert\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797586 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-run-ovn-kubernetes\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.797630 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-etc-openvswitch\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.800508 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" event={"ID":"d57913d8-5742-4fd2-925b-6721231e7863","Type":"ContainerStarted","Data":"41ceb87ee40b40c465ba3368061e51f9969eaa172675cc87d05d86712f1355fb"} Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.801947 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-77mzd" event={"ID":"96c21054-65ed-4db4-969f-bbb10f612772","Type":"ContainerStarted","Data":"8d299d368fd258d619727bf45398a2f86188db2329017c97c543baeeec3307f3"} Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.809794 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerStarted","Data":"42e3cf94085f1b1fb49d17053fcf5a5fc510ca4cbdb6169e335d0ccd399eaed7"} Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.811548 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.828531 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.846651 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.859781 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.876761 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.889939 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898269 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-slash\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898330 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-ovn\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898358 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-env-overrides\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898382 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-var-lib-openvswitch\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898424 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovnkube-script-lib\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898452 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-log-socket\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898471 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-cni-netd\") 
pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898491 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-openvswitch\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898492 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-slash\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898519 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-kubelet\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898543 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovn-node-metrics-cert\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898565 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9ts2\" (UniqueName: \"kubernetes.io/projected/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-kube-api-access-w9ts2\") pod \"ovnkube-node-zvpgn\" (UID: 
\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898587 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-etc-openvswitch\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898607 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-run-ovn-kubernetes\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898640 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-systemd\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898699 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovnkube-config\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898725 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-systemd-units\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898779 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898806 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-node-log\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898867 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-run-netns\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898888 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-cni-bin\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.898984 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-cni-bin\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 
31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.899057 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-run-ovn-kubernetes\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.899113 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-systemd\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.899594 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-systemd-units\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.899640 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.899693 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-node-log\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.899718 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-run-netns\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.899761 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovnkube-script-lib\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.899788 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-log-socket\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.899767 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-openvswitch\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.899933 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-cni-netd\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.900064 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovnkube-config\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.900109 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-env-overrides\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.900135 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-var-lib-openvswitch\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.900157 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-ovn\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.900167 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-kubelet\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.901286 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-etc-openvswitch\") pod \"ovnkube-node-zvpgn\" (UID: 
\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.903723 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.909763 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovn-node-metrics-cert\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.919565 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.920619 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9ts2\" (UniqueName: \"kubernetes.io/projected/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-kube-api-access-w9ts2\") pod \"ovnkube-node-zvpgn\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.936675 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.951172 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.969545 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:28 crc kubenswrapper[4687]: I0131 06:43:28.997490 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:28Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.067709 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:29 crc kubenswrapper[4687]: W0131 06:43:29.186352 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55484aa7_5d82_4f2e_ab22_2ceae9c90c96.slice/crio-c3e382f166a625460737379a3a5a2eea8a04d3ee45bb6a7050109432c7bf2b43 WatchSource:0}: Error finding container c3e382f166a625460737379a3a5a2eea8a04d3ee45bb6a7050109432c7bf2b43: Status 404 returned error can't find the container with id c3e382f166a625460737379a3a5a2eea8a04d3ee45bb6a7050109432c7bf2b43 Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.554874 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:39:19.562526905 +0000 UTC Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.603481 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.603508 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.603493 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:29 crc kubenswrapper[4687]: E0131 06:43:29.603633 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:29 crc kubenswrapper[4687]: E0131 06:43:29.603734 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:29 crc kubenswrapper[4687]: E0131 06:43:29.603789 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.813252 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-sv5n6" event={"ID":"ad4abe4f-d012-452a-81cf-6e96ec9a8dea","Type":"ContainerStarted","Data":"1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a"} Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.814757 4687 generic.go:334] "Generic (PLEG): container finished" podID="d57913d8-5742-4fd2-925b-6721231e7863" containerID="bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02" exitCode=0 Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.815222 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" event={"ID":"d57913d8-5742-4fd2-925b-6721231e7863","Type":"ContainerDied","Data":"bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02"} Jan 31 06:43:29 crc 
kubenswrapper[4687]: I0131 06:43:29.816470 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-77mzd" event={"ID":"96c21054-65ed-4db4-969f-bbb10f612772","Type":"ContainerStarted","Data":"8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7"} Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.818772 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerStarted","Data":"2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51"} Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.818794 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerStarted","Data":"abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a"} Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.820275 4687 generic.go:334] "Generic (PLEG): container finished" podID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerID="62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8" exitCode=0 Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.820298 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerDied","Data":"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8"} Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.820360 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerStarted","Data":"c3e382f166a625460737379a3a5a2eea8a04d3ee45bb6a7050109432c7bf2b43"} Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.829827 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:29Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.851804 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers 
with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\
":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"
mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:29Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.874637 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:29Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.892970 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:29Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.906918 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:29Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.920385 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:29Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.935312 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:29Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.952663 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:29Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.966115 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:29Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:29 crc kubenswrapper[4687]: I0131 06:43:29.989686 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:29Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.006564 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.022142 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.034443 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.049759 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.067466 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.091359 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.107861 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.127305 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.142724 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.157430 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.173580 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.188334 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.207711 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.220906 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.230960 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.232791 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.232856 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.232869 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.233356 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.236623 4687 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.241843 4687 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.242139 4687 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.243350 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.243430 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.243446 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.243463 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.243474 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:30Z","lastTransitionTime":"2026-01-31T06:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.254879 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: E0131 06:43:30.262916 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.266359 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.266398 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.266431 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.266451 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.266463 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:30Z","lastTransitionTime":"2026-01-31T06:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:30 crc kubenswrapper[4687]: E0131 06:43:30.277890 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.281479 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.281513 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.281522 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.281534 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.281543 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:30Z","lastTransitionTime":"2026-01-31T06:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:30 crc kubenswrapper[4687]: E0131 06:43:30.293359 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.298160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.298200 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.298208 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.298222 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.298231 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:30Z","lastTransitionTime":"2026-01-31T06:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:30 crc kubenswrapper[4687]: E0131 06:43:30.309627 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.316715 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.316755 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.316767 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.316783 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.316801 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:30Z","lastTransitionTime":"2026-01-31T06:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:30 crc kubenswrapper[4687]: E0131 06:43:30.328987 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: E0131 06:43:30.329142 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.330450 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.330480 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.330494 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.330513 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.330524 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:30Z","lastTransitionTime":"2026-01-31T06:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.432989 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.433029 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.433037 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.433051 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.433060 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:30Z","lastTransitionTime":"2026-01-31T06:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.537855 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.537901 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.537914 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.537995 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.538013 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:30Z","lastTransitionTime":"2026-01-31T06:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.555163 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:36:09.642949113 +0000 UTC Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.620373 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-bfpqq"] Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.620792 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bfpqq" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.622367 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.625506 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.625520 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.625953 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.637774 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.640351 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.640393 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.640440 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.640459 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.640475 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:30Z","lastTransitionTime":"2026-01-31T06:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.652106 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea
02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.668218 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.681645 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.695398 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.711238 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.717359 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nq48\" (UniqueName: \"kubernetes.io/projected/83663f48-cbeb-4689-ad08-405a1d894791-kube-api-access-6nq48\") pod \"node-ca-bfpqq\" (UID: \"83663f48-cbeb-4689-ad08-405a1d894791\") " pod="openshift-image-registry/node-ca-bfpqq" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.717424 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/83663f48-cbeb-4689-ad08-405a1d894791-serviceca\") pod \"node-ca-bfpqq\" (UID: \"83663f48-cbeb-4689-ad08-405a1d894791\") " pod="openshift-image-registry/node-ca-bfpqq" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.717478 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83663f48-cbeb-4689-ad08-405a1d894791-host\") pod \"node-ca-bfpqq\" (UID: \"83663f48-cbeb-4689-ad08-405a1d894791\") " pod="openshift-image-registry/node-ca-bfpqq" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.724645 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.738901 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.743044 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.743090 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.743099 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.743114 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.743125 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:30Z","lastTransitionTime":"2026-01-31T06:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.752271 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.766426 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.784869 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.801471 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.818908 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83663f48-cbeb-4689-ad08-405a1d894791-host\") pod \"node-ca-bfpqq\" (UID: \"83663f48-cbeb-4689-ad08-405a1d894791\") " pod="openshift-image-registry/node-ca-bfpqq" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.819005 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nq48\" (UniqueName: \"kubernetes.io/projected/83663f48-cbeb-4689-ad08-405a1d894791-kube-api-access-6nq48\") pod \"node-ca-bfpqq\" (UID: \"83663f48-cbeb-4689-ad08-405a1d894791\") " pod="openshift-image-registry/node-ca-bfpqq" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.819045 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/83663f48-cbeb-4689-ad08-405a1d894791-serviceca\") pod \"node-ca-bfpqq\" (UID: \"83663f48-cbeb-4689-ad08-405a1d894791\") " pod="openshift-image-registry/node-ca-bfpqq" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.819074 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83663f48-cbeb-4689-ad08-405a1d894791-host\") pod \"node-ca-bfpqq\" (UID: \"83663f48-cbeb-4689-ad08-405a1d894791\") " pod="openshift-image-registry/node-ca-bfpqq" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.819989 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/83663f48-cbeb-4689-ad08-405a1d894791-serviceca\") pod \"node-ca-bfpqq\" (UID: \"83663f48-cbeb-4689-ad08-405a1d894791\") " pod="openshift-image-registry/node-ca-bfpqq" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.825235 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" event={"ID":"d57913d8-5742-4fd2-925b-6721231e7863","Type":"ContainerStarted","Data":"b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd"} Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.827006 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.829313 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerStarted","Data":"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2"} Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.829363 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerStarted","Data":"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1"} Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.829379 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerStarted","Data":"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1"} Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.829391 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerStarted","Data":"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758"} Jan 31 06:43:30 crc 
kubenswrapper[4687]: I0131 06:43:30.829406 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerStarted","Data":"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad"} Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.829437 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerStarted","Data":"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443"} Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.845652 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.845941 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.845951 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.845965 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.845996 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:30Z","lastTransitionTime":"2026-01-31T06:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.846565 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.849122 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nq48\" (UniqueName: \"kubernetes.io/projected/83663f48-cbeb-4689-ad08-405a1d894791-kube-api-access-6nq48\") pod \"node-ca-bfpqq\" (UID: \"83663f48-cbeb-4689-ad08-405a1d894791\") " pod="openshift-image-registry/node-ca-bfpqq" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.863080 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e9
54abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.875070 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.888029 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.916401 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.933644 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bfpqq" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.949319 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.957080 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.957282 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.957382 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.957500 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.957585 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:30Z","lastTransitionTime":"2026-01-31T06:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.970436 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:30 crc kubenswrapper[4687]: I0131 06:43:30.986126 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:30Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.002226 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b390
6528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.016985 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.036808 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.049013 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.059579 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:31 crc 
kubenswrapper[4687]: I0131 06:43:31.059610 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.059618 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.059631 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.059640 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:31Z","lastTransitionTime":"2026-01-31T06:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.063817 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.085539 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.098728 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.165728 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.165767 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.165778 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 
06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.165794 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.165808 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:31Z","lastTransitionTime":"2026-01-31T06:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.268595 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.268806 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.268814 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.268826 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.268835 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:31Z","lastTransitionTime":"2026-01-31T06:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.325315 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.325397 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.325445 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:31 crc kubenswrapper[4687]: E0131 06:43:31.325571 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 06:43:31 crc kubenswrapper[4687]: E0131 06:43:31.325595 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:43:39.325551355 +0000 UTC m=+45.602811160 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:43:31 crc kubenswrapper[4687]: E0131 06:43:31.325643 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:39.325626007 +0000 UTC m=+45.602885582 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 06:43:31 crc kubenswrapper[4687]: E0131 06:43:31.325753 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 06:43:31 crc kubenswrapper[4687]: E0131 06:43:31.325863 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:39.325839242 +0000 UTC m=+45.603098817 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.371524 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.371570 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.371579 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.371594 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.371604 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:31Z","lastTransitionTime":"2026-01-31T06:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.426833 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.426924 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:31 crc kubenswrapper[4687]: E0131 06:43:31.427022 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 06:43:31 crc kubenswrapper[4687]: E0131 06:43:31.427043 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 06:43:31 crc kubenswrapper[4687]: E0131 06:43:31.427057 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:31 crc kubenswrapper[4687]: E0131 06:43:31.427022 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 06:43:31 crc 
kubenswrapper[4687]: E0131 06:43:31.427129 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 06:43:31 crc kubenswrapper[4687]: E0131 06:43:31.427140 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:31 crc kubenswrapper[4687]: E0131 06:43:31.427112 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:39.427093663 +0000 UTC m=+45.704353238 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:31 crc kubenswrapper[4687]: E0131 06:43:31.427188 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:39.427177655 +0000 UTC m=+45.704437240 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.474331 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.474377 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.474387 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.474420 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.474433 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:31Z","lastTransitionTime":"2026-01-31T06:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.556248 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:59:52.159072095 +0000 UTC Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.577114 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.577150 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.577160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.577184 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.577194 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:31Z","lastTransitionTime":"2026-01-31T06:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.602830 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.602888 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.602917 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:31 crc kubenswrapper[4687]: E0131 06:43:31.602948 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:31 crc kubenswrapper[4687]: E0131 06:43:31.603043 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:31 crc kubenswrapper[4687]: E0131 06:43:31.603076 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.679786 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.679826 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.679837 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.679854 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.679867 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:31Z","lastTransitionTime":"2026-01-31T06:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.782578 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.782925 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.782937 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.782955 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.782966 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:31Z","lastTransitionTime":"2026-01-31T06:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.833751 4687 generic.go:334] "Generic (PLEG): container finished" podID="d57913d8-5742-4fd2-925b-6721231e7863" containerID="b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd" exitCode=0 Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.833824 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" event={"ID":"d57913d8-5742-4fd2-925b-6721231e7863","Type":"ContainerDied","Data":"b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd"} Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.835102 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bfpqq" event={"ID":"83663f48-cbeb-4689-ad08-405a1d894791","Type":"ContainerStarted","Data":"e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819"} Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.835125 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bfpqq" event={"ID":"83663f48-cbeb-4689-ad08-405a1d894791","Type":"ContainerStarted","Data":"91996b72089ca26dd8d721df97f948cf36558982bc0315bbe4e128fdaf7f5885"} Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.849343 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.863148 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.876328 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.884946 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.884977 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.884986 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:31 crc 
kubenswrapper[4687]: I0131 06:43:31.884999 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.885009 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:31Z","lastTransitionTime":"2026-01-31T06:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.894218 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.910158 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.923440 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.936307 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.951139 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb0
40c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.966295 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-e
tc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.980839 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T06:43:31Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.987182 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.987227 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.987238 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.987254 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.987266 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:31Z","lastTransitionTime":"2026-01-31T06:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:31 crc kubenswrapper[4687]: I0131 06:43:31.994552 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:31Z 
is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.009237 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.026554 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.037618 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.050044 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.062238 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.076290 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.089144 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.089182 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.089191 4687 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.089207 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.089219 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:32Z","lastTransitionTime":"2026-01-31T06:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.092655 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.101650 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.112083 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.123666 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.137933 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.149447 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.166107 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.181603 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.191319 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.191400 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.191431 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.191448 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.191459 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:32Z","lastTransitionTime":"2026-01-31T06:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.193194 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.202289 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.212368 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.293435 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.293682 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.293697 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:32 crc 
kubenswrapper[4687]: I0131 06:43:32.293712 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.293724 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:32Z","lastTransitionTime":"2026-01-31T06:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.396064 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.396106 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.396115 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.396129 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.396139 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:32Z","lastTransitionTime":"2026-01-31T06:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.498554 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.498613 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.498640 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.498663 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.498680 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:32Z","lastTransitionTime":"2026-01-31T06:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.556556 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 10:22:23.484244582 +0000 UTC Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.601462 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.601697 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.601775 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.601853 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.601940 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:32Z","lastTransitionTime":"2026-01-31T06:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.704217 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.704263 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.704278 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.704293 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.704304 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:32Z","lastTransitionTime":"2026-01-31T06:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.806008 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.806049 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.806061 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.806078 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.806089 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:32Z","lastTransitionTime":"2026-01-31T06:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.840253 4687 generic.go:334] "Generic (PLEG): container finished" podID="d57913d8-5742-4fd2-925b-6721231e7863" containerID="7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699" exitCode=0 Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.840294 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" event={"ID":"d57913d8-5742-4fd2-925b-6721231e7863","Type":"ContainerDied","Data":"7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699"} Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.855840 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.870874 4687 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\"
:\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.887611 4687 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 
06:43:32.908092 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.908941 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.908971 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.908979 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.908997 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.909008 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:32Z","lastTransitionTime":"2026-01-31T06:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.920671 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.932261 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.946246 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.960783 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.974365 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:32 crc kubenswrapper[4687]: I0131 06:43:32.987160 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.001924 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:33Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.011998 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.012038 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.012047 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.012062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.012073 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:33Z","lastTransitionTime":"2026-01-31T06:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.015498 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:33Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.027472 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:33Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.036965 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:33Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.113853 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.113900 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.113912 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:33 crc 
kubenswrapper[4687]: I0131 06:43:33.113929 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.113941 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:33Z","lastTransitionTime":"2026-01-31T06:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.216143 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.216186 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.216196 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.216212 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.216221 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:33Z","lastTransitionTime":"2026-01-31T06:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.318845 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.318882 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.318891 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.318908 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.318918 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:33Z","lastTransitionTime":"2026-01-31T06:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.421339 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.421381 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.421392 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.421424 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.421439 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:33Z","lastTransitionTime":"2026-01-31T06:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.523939 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.523977 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.523989 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.524006 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.524016 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:33Z","lastTransitionTime":"2026-01-31T06:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.557393 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:49:50.213931881 +0000 UTC Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.602790 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.602823 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.602788 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:33 crc kubenswrapper[4687]: E0131 06:43:33.602922 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:33 crc kubenswrapper[4687]: E0131 06:43:33.603054 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:33 crc kubenswrapper[4687]: E0131 06:43:33.603150 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.625565 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.625603 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.625612 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.625625 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.625634 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:33Z","lastTransitionTime":"2026-01-31T06:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.728965 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.729016 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.729034 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.729055 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.729067 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:33Z","lastTransitionTime":"2026-01-31T06:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.831877 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.831933 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.831947 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.831964 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.831976 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:33Z","lastTransitionTime":"2026-01-31T06:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.849274 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerStarted","Data":"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d"} Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.852154 4687 generic.go:334] "Generic (PLEG): container finished" podID="d57913d8-5742-4fd2-925b-6721231e7863" containerID="3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241" exitCode=0 Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.852206 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" event={"ID":"d57913d8-5742-4fd2-925b-6721231e7863","Type":"ContainerDied","Data":"3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241"} Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.872343 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:33Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.896041 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:33Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.907767 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:33Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.921648 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:33Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.933755 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:33 crc 
kubenswrapper[4687]: I0131 06:43:33.933815 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.933824 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.933835 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.933843 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:33Z","lastTransitionTime":"2026-01-31T06:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.934734 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:33Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.946613 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:33Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.959378 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:33Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.972751 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:33Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:33 crc kubenswrapper[4687]: I0131 06:43:33.984984 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:33Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.005015 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:34Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.015283 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:34Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.029394 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:34Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.037775 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.037812 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.037820 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.037834 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.037843 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:34Z","lastTransitionTime":"2026-01-31T06:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.043061 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:34Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.053145 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:34Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.140188 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.140586 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.140716 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.140821 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.140913 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:34Z","lastTransitionTime":"2026-01-31T06:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.243194 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.243249 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.243267 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.243289 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.243307 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:34Z","lastTransitionTime":"2026-01-31T06:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.347033 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.347069 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.347080 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.347096 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.347107 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:34Z","lastTransitionTime":"2026-01-31T06:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.449768 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.449802 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.449810 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.449824 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.449833 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:34Z","lastTransitionTime":"2026-01-31T06:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.552242 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.552293 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.552310 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.552326 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.552337 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:34Z","lastTransitionTime":"2026-01-31T06:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.557625 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 14:09:31.996824528 +0000 UTC Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.654996 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.655048 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.655060 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.655079 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.655098 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:34Z","lastTransitionTime":"2026-01-31T06:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.692529 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.693094 4687 scope.go:117] "RemoveContainer" containerID="a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.757483 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.757542 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.757559 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.757592 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.757610 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:34Z","lastTransitionTime":"2026-01-31T06:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.859466 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.859526 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.859549 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.859584 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.859607 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:34Z","lastTransitionTime":"2026-01-31T06:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.861046 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" event={"ID":"d57913d8-5742-4fd2-925b-6721231e7863","Type":"ContainerStarted","Data":"aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0"} Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.961968 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.962004 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.962014 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.962029 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:34 crc kubenswrapper[4687]: I0131 06:43:34.962040 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:34Z","lastTransitionTime":"2026-01-31T06:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.067081 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.067130 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.067140 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.067154 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.067165 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:35Z","lastTransitionTime":"2026-01-31T06:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.169628 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.169680 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.169692 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.169710 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.169722 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:35Z","lastTransitionTime":"2026-01-31T06:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.187643 4687 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.272956 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.273036 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.273047 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.273064 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.273077 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:35Z","lastTransitionTime":"2026-01-31T06:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.375985 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.376040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.376051 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.376065 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.376075 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:35Z","lastTransitionTime":"2026-01-31T06:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.478985 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.479055 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.479071 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.479094 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.479111 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:35Z","lastTransitionTime":"2026-01-31T06:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.558158 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:11:35.67071124 +0000 UTC Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.580932 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.580970 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.580979 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.580995 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.581004 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:35Z","lastTransitionTime":"2026-01-31T06:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.602829 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:35 crc kubenswrapper[4687]: E0131 06:43:35.602957 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.602976 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.603033 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:35 crc kubenswrapper[4687]: E0131 06:43:35.603101 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:35 crc kubenswrapper[4687]: E0131 06:43:35.603157 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.617873 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.630942 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\
\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.641399 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb7
05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.657907 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.676203 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.683077 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.683121 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.683130 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.683144 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.683153 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:35Z","lastTransitionTime":"2026-01-31T06:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.690251 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.703707 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.724863 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.747509 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.766150 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.779402 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.785392 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.785458 4687 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.785472 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.785492 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.785506 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:35Z","lastTransitionTime":"2026-01-31T06:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.796645 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.813731 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.831986 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.868859 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.871443 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae"} Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.872013 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.879477 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerStarted","Data":"590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7"} Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.879522 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.879566 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.879586 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.889292 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.889360 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.889377 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.889402 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.889438 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:35Z","lastTransitionTime":"2026-01-31T06:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.890158 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:
43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.908393 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.922558 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.927385 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.939068 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.949797 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.962326 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.981397 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.991087 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.991126 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.991175 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.991191 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.991229 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:35Z","lastTransitionTime":"2026-01-31T06:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:35 crc kubenswrapper[4687]: I0131 06:43:35.995359 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.008819 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.024905 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.047165 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.070747 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.086948 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.093630 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.093685 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.093698 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.093716 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.094156 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:36Z","lastTransitionTime":"2026-01-31T06:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.101528 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.113647 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112
b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.125577 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.136667 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.152662 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb0
40c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.166400 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-e
tc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.183938 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.195380 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.195957 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.196002 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.196013 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.196030 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.196041 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:36Z","lastTransitionTime":"2026-01-31T06:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.210579 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drw
lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.227057 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.238019 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.247503 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.257661 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.266748 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.277230 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.288648 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.298224 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.298284 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.298328 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.298345 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.298378 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:36Z","lastTransitionTime":"2026-01-31T06:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.400958 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.401034 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.401052 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.401074 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.401129 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:36Z","lastTransitionTime":"2026-01-31T06:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.503261 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.503327 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.503340 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.503361 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.503376 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:36Z","lastTransitionTime":"2026-01-31T06:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.559188 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 22:36:59.283873941 +0000 UTC Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.605916 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.605955 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.605966 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.605980 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.605991 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:36Z","lastTransitionTime":"2026-01-31T06:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.708587 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.709031 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.709181 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.709325 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.709541 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:36Z","lastTransitionTime":"2026-01-31T06:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.812707 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.812935 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.812999 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.813059 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.813115 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:36Z","lastTransitionTime":"2026-01-31T06:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.886709 4687 generic.go:334] "Generic (PLEG): container finished" podID="d57913d8-5742-4fd2-925b-6721231e7863" containerID="aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0" exitCode=0 Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.886813 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" event={"ID":"d57913d8-5742-4fd2-925b-6721231e7863","Type":"ContainerDied","Data":"aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0"} Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.909096 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.915031 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.915068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.915076 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.915090 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.915099 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:36Z","lastTransitionTime":"2026-01-31T06:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.926766 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea
02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.947467 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.962147 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.978464 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:36 crc kubenswrapper[4687]: I0131 06:43:36.994395 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.009997 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.018290 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.018371 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.018386 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.018432 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.018451 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:37Z","lastTransitionTime":"2026-01-31T06:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.024979 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.039273 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.053694 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-i
dentity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.066348 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.083631 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.105599 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.115817 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d17053
5697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.123594 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.123641 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.123653 4687 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.123669 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.123683 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:37Z","lastTransitionTime":"2026-01-31T06:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.226366 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.226425 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.226438 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.226453 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.226468 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:37Z","lastTransitionTime":"2026-01-31T06:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.328988 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.329035 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.329046 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.329062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.329074 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:37Z","lastTransitionTime":"2026-01-31T06:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.430972 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.431010 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.431020 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.431035 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.431048 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:37Z","lastTransitionTime":"2026-01-31T06:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.534324 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.534358 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.534367 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.534380 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.534391 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:37Z","lastTransitionTime":"2026-01-31T06:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.559637 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:00:22.306394734 +0000 UTC Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.605153 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:37 crc kubenswrapper[4687]: E0131 06:43:37.605268 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.605331 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:37 crc kubenswrapper[4687]: E0131 06:43:37.605385 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.605451 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:37 crc kubenswrapper[4687]: E0131 06:43:37.605500 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.637257 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.637286 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.637293 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.637307 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.637316 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:37Z","lastTransitionTime":"2026-01-31T06:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.740380 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.740445 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.740459 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.740474 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.740485 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:37Z","lastTransitionTime":"2026-01-31T06:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.843578 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.843621 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.843631 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.843649 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.843661 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:37Z","lastTransitionTime":"2026-01-31T06:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.894597 4687 generic.go:334] "Generic (PLEG): container finished" podID="d57913d8-5742-4fd2-925b-6721231e7863" containerID="f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4" exitCode=0 Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.894655 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" event={"ID":"d57913d8-5742-4fd2-925b-6721231e7863","Type":"ContainerDied","Data":"f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4"} Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.913134 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.926958 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.943986 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.945855 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.945898 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.945911 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.945929 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.945945 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:37Z","lastTransitionTime":"2026-01-31T06:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.959623 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.976179 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:37 crc kubenswrapper[4687]: I0131 06:43:37.991528 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.006769 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.019459 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.031269 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.045754 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.048944 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.048981 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.048991 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.049009 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.049022 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:38Z","lastTransitionTime":"2026-01-31T06:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.059964 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:
43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.076122 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.095290 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.106793 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.151126 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.151178 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.151192 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.151212 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 
06:43:38.151225 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:38Z","lastTransitionTime":"2026-01-31T06:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.254166 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.254209 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.254221 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.254235 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.254247 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:38Z","lastTransitionTime":"2026-01-31T06:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.357135 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.357175 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.357185 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.357204 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.357214 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:38Z","lastTransitionTime":"2026-01-31T06:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.459719 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.459759 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.459769 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.459785 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.459796 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:38Z","lastTransitionTime":"2026-01-31T06:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.560269 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:13:58.801515328 +0000 UTC Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.562339 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.562383 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.562395 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.562442 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.562456 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:38Z","lastTransitionTime":"2026-01-31T06:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.664869 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.664917 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.664929 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.664946 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.664959 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:38Z","lastTransitionTime":"2026-01-31T06:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.767838 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.767874 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.767884 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.767899 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.767910 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:38Z","lastTransitionTime":"2026-01-31T06:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.870101 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.870140 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.870148 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.870163 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.870172 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:38Z","lastTransitionTime":"2026-01-31T06:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.902055 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" event={"ID":"d57913d8-5742-4fd2-925b-6721231e7863","Type":"ContainerStarted","Data":"38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6"} Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.922022 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugin
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2dae
d8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedA
t\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\
",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.946154 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.963616 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d17053
5697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.973168 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.973222 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.973235 4687 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.973255 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.973268 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:38Z","lastTransitionTime":"2026-01-31T06:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.981933 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:38 crc kubenswrapper[4687]: I0131 06:43:38.997181 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.011940 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.030330 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.044708 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.061509 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.075128 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.075590 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.075660 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.075672 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.075698 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.075711 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:39Z","lastTransitionTime":"2026-01-31T06:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.089510 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.104282 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.115541 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.128897 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.177510 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.177563 4687 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.177576 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.177591 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.177603 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:39Z","lastTransitionTime":"2026-01-31T06:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.280523 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.280561 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.280572 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.280589 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.280601 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:39Z","lastTransitionTime":"2026-01-31T06:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.382951 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.382993 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.383003 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.383017 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.383027 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:39Z","lastTransitionTime":"2026-01-31T06:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.404424 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.404504 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.404532 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:39 crc kubenswrapper[4687]: E0131 06:43:39.404641 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 06:43:39 crc kubenswrapper[4687]: E0131 06:43:39.404672 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:43:55.404652467 +0000 UTC m=+61.681912042 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:43:39 crc kubenswrapper[4687]: E0131 06:43:39.404691 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:55.404685207 +0000 UTC m=+61.681944782 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 06:43:39 crc kubenswrapper[4687]: E0131 06:43:39.404734 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 06:43:39 crc kubenswrapper[4687]: E0131 06:43:39.404816 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:55.40479905 +0000 UTC m=+61.682058625 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.485825 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.485899 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.485912 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.485929 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.485942 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:39Z","lastTransitionTime":"2026-01-31T06:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.505469 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.505554 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:39 crc kubenswrapper[4687]: E0131 06:43:39.505662 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 06:43:39 crc kubenswrapper[4687]: E0131 06:43:39.505693 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 06:43:39 crc kubenswrapper[4687]: E0131 06:43:39.505706 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:39 crc kubenswrapper[4687]: E0131 06:43:39.505755 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:55.505739113 +0000 UTC m=+61.782998688 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:39 crc kubenswrapper[4687]: E0131 06:43:39.505662 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 06:43:39 crc kubenswrapper[4687]: E0131 06:43:39.505828 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 06:43:39 crc kubenswrapper[4687]: E0131 06:43:39.505842 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:39 crc kubenswrapper[4687]: E0131 06:43:39.505885 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 06:43:55.505874077 +0000 UTC m=+61.783133652 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.560840 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 19:30:43.400973121 +0000 UTC Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.588944 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.588995 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.589007 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.589024 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.589036 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:39Z","lastTransitionTime":"2026-01-31T06:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.603338 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.603396 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.603352 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:39 crc kubenswrapper[4687]: E0131 06:43:39.603566 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:39 crc kubenswrapper[4687]: E0131 06:43:39.603726 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:39 crc kubenswrapper[4687]: E0131 06:43:39.603888 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.691305 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.691344 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.691369 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.691385 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.691397 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:39Z","lastTransitionTime":"2026-01-31T06:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.794150 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.794192 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.794203 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.794218 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.794228 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:39Z","lastTransitionTime":"2026-01-31T06:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.896180 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.896213 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.896223 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.896239 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.896250 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:39Z","lastTransitionTime":"2026-01-31T06:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.906329 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovnkube-controller/0.log" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.909180 4687 generic.go:334] "Generic (PLEG): container finished" podID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerID="590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7" exitCode=1 Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.909245 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerDied","Data":"590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7"} Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.909983 4687 scope.go:117] "RemoveContainer" containerID="590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.924534 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.944118 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e
79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:
43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.968696 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:39Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 06:43:39.555600 6030 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555647 6030 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555680 6030 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 06:43:39.555709 6030 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 06:43:39.555714 6030 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 06:43:39.555857 6030 factory.go:656] Stopping watch factory\\\\nI0131 06:43:39.555862 6030 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555921 6030 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555958 6030 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.556150 6030 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 06:43:39.556183 6030 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.981090 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.995001 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.998610 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.998654 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.998664 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.998679 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:39 crc kubenswrapper[4687]: I0131 06:43:39.998690 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:39Z","lastTransitionTime":"2026-01-31T06:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.008065 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea
02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.021466 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.038328 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.054772 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.071280 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.083992 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.095356 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.100850 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.100879 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.100891 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.100906 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.100919 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:40Z","lastTransitionTime":"2026-01-31T06:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.106926 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.118713 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112
b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.159593 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf"] Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.160106 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.162252 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.162490 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.176153 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.188692 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.203375 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.203401 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.203433 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:40 crc 
kubenswrapper[4687]: I0131 06:43:40.203456 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.203471 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:40Z","lastTransitionTime":"2026-01-31T06:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.204946 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.209757 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bffca17-c223-4bd0-b78c-a5b059413223-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ptfrf\" (UID: \"5bffca17-c223-4bd0-b78c-a5b059413223\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.209806 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5bffca17-c223-4bd0-b78c-a5b059413223-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ptfrf\" (UID: \"5bffca17-c223-4bd0-b78c-a5b059413223\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.209828 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5bffca17-c223-4bd0-b78c-a5b059413223-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ptfrf\" (UID: \"5bffca17-c223-4bd0-b78c-a5b059413223\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.209864 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tddl4\" (UniqueName: \"kubernetes.io/projected/5bffca17-c223-4bd0-b78c-a5b059413223-kube-api-access-tddl4\") pod \"ovnkube-control-plane-749d76644c-ptfrf\" (UID: \"5bffca17-c223-4bd0-b78c-a5b059413223\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.219486 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.236576 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.248196 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.264451 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.282565 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.295355 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptable
s-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.305147 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.305183 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.305192 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.305207 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.305215 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:40Z","lastTransitionTime":"2026-01-31T06:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.310652 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tddl4\" (UniqueName: \"kubernetes.io/projected/5bffca17-c223-4bd0-b78c-a5b059413223-kube-api-access-tddl4\") pod \"ovnkube-control-plane-749d76644c-ptfrf\" (UID: \"5bffca17-c223-4bd0-b78c-a5b059413223\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.310702 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bffca17-c223-4bd0-b78c-a5b059413223-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ptfrf\" (UID: \"5bffca17-c223-4bd0-b78c-a5b059413223\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.310736 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5bffca17-c223-4bd0-b78c-a5b059413223-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ptfrf\" (UID: \"5bffca17-c223-4bd0-b78c-a5b059413223\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.310752 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5bffca17-c223-4bd0-b78c-a5b059413223-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ptfrf\" (UID: \"5bffca17-c223-4bd0-b78c-a5b059413223\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.310859 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.311354 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5bffca17-c223-4bd0-b78c-a5b059413223-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ptfrf\" (UID: \"5bffca17-c223-4bd0-b78c-a5b059413223\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.311403 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5bffca17-c223-4bd0-b78c-a5b059413223-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ptfrf\" (UID: \"5bffca17-c223-4bd0-b78c-a5b059413223\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.316828 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bffca17-c223-4bd0-b78c-a5b059413223-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ptfrf\" (UID: \"5bffca17-c223-4bd0-b78c-a5b059413223\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.324431 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.328467 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tddl4\" (UniqueName: \"kubernetes.io/projected/5bffca17-c223-4bd0-b78c-a5b059413223-kube-api-access-tddl4\") pod \"ovnkube-control-plane-749d76644c-ptfrf\" (UID: \"5bffca17-c223-4bd0-b78c-a5b059413223\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.340647 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.361165 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:39Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 06:43:39.555600 6030 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555647 6030 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555680 6030 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 06:43:39.555709 6030 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 06:43:39.555714 6030 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 06:43:39.555857 6030 factory.go:656] Stopping watch factory\\\\nI0131 06:43:39.555862 6030 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555921 6030 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555958 6030 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.556150 6030 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 06:43:39.556183 6030 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.374806 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.388524 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.407350 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.407394 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.407403 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.407436 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.407445 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:40Z","lastTransitionTime":"2026-01-31T06:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.477462 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" Jan 31 06:43:40 crc kubenswrapper[4687]: W0131 06:43:40.491372 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bffca17_c223_4bd0_b78c_a5b059413223.slice/crio-61377bacff4f5f6233632bae30ccc64375e7d3b1bbbe1715607894cb8fc29c87 WatchSource:0}: Error finding container 61377bacff4f5f6233632bae30ccc64375e7d3b1bbbe1715607894cb8fc29c87: Status 404 returned error can't find the container with id 61377bacff4f5f6233632bae30ccc64375e7d3b1bbbe1715607894cb8fc29c87 Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.509621 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.509668 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.509678 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.509701 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.509713 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:40Z","lastTransitionTime":"2026-01-31T06:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.561196 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 01:40:42.672114749 +0000 UTC Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.614971 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.615053 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.615067 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.615085 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.615098 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:40Z","lastTransitionTime":"2026-01-31T06:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.690729 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.690775 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.690787 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.690804 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.690816 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:40Z","lastTransitionTime":"2026-01-31T06:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:40 crc kubenswrapper[4687]: E0131 06:43:40.702936 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.706299 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.706370 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.706383 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.706423 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.706446 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:40Z","lastTransitionTime":"2026-01-31T06:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:40 crc kubenswrapper[4687]: E0131 06:43:40.717773 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.721424 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.721459 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.721468 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.721482 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.721492 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:40Z","lastTransitionTime":"2026-01-31T06:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:40 crc kubenswrapper[4687]: E0131 06:43:40.735785 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.740554 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.740598 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.740607 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.740621 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.740631 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:40Z","lastTransitionTime":"2026-01-31T06:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:40 crc kubenswrapper[4687]: E0131 06:43:40.753152 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.757855 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.757886 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.757894 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.757907 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.757916 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:40Z","lastTransitionTime":"2026-01-31T06:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:40 crc kubenswrapper[4687]: E0131 06:43:40.770552 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: E0131 06:43:40.770703 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.773605 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.773639 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.773665 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.773682 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.773691 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:40Z","lastTransitionTime":"2026-01-31T06:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.875859 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.875907 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.875919 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.875936 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.875959 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:40Z","lastTransitionTime":"2026-01-31T06:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.913673 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" event={"ID":"5bffca17-c223-4bd0-b78c-a5b059413223","Type":"ContainerStarted","Data":"c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7"} Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.913724 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" event={"ID":"5bffca17-c223-4bd0-b78c-a5b059413223","Type":"ContainerStarted","Data":"61377bacff4f5f6233632bae30ccc64375e7d3b1bbbe1715607894cb8fc29c87"} Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.915217 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovnkube-controller/1.log" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.915666 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovnkube-controller/0.log" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.918216 4687 generic.go:334] "Generic (PLEG): container finished" podID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerID="449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d" exitCode=1 Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.918250 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerDied","Data":"449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d"} Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.918279 4687 scope.go:117] "RemoveContainer" containerID="590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.919003 
4687 scope.go:117] "RemoveContainer" containerID="449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d" Jan 31 06:43:40 crc kubenswrapper[4687]: E0131 06:43:40.919265 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.934493 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.945939 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.956292 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.966866 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.977881 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.977912 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:40 crc 
kubenswrapper[4687]: I0131 06:43:40.977923 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.977939 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.977950 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:40Z","lastTransitionTime":"2026-01-31T06:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.978431 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:40 crc kubenswrapper[4687]: I0131 06:43:40.989630 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.001856 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.013642 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.026745 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.037887 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.047993 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.059319 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06
:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.075027 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\"
:\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.080489 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.080530 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.080539 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.080553 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.080562 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:41Z","lastTransitionTime":"2026-01-31T06:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.098640 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:39Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 06:43:39.555600 6030 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555647 6030 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555680 6030 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 06:43:39.555709 6030 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 06:43:39.555714 6030 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 06:43:39.555857 6030 factory.go:656] Stopping watch factory\\\\nI0131 06:43:39.555862 6030 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555921 6030 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555958 6030 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.556150 6030 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 06:43:39.556183 6030 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"nshift-multus/multus-additional-cni-plugins-jlk4z after 0 failed attempt(s)\\\\nI0131 06:43:40.648883 6241 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 06:43:40.648884 6241 services_controller.go:360] Finished syncing service api on namespace openshift-oauth-apiserver for network=default : 2.720146ms\\\\nI0131 06:43:40.648854 6241 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bfpqq\\\\nF0131 
06:43:40.648882 6241 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z]\\\\nI0131 06:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\
":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.112172 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.182667 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.182702 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.182759 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.182775 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.182784 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:41Z","lastTransitionTime":"2026-01-31T06:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.285403 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.285467 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.285479 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.285496 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.285507 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:41Z","lastTransitionTime":"2026-01-31T06:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.387897 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.387946 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.387964 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.387982 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.387997 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:41Z","lastTransitionTime":"2026-01-31T06:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.490138 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.490178 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.490187 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.490200 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.490209 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:41Z","lastTransitionTime":"2026-01-31T06:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.561508 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 10:11:29.995970733 +0000 UTC Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.592630 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.592673 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.592681 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.592695 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.592707 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:41Z","lastTransitionTime":"2026-01-31T06:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.603015 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.603028 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:41 crc kubenswrapper[4687]: E0131 06:43:41.603179 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.603221 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:41 crc kubenswrapper[4687]: E0131 06:43:41.603285 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:41 crc kubenswrapper[4687]: E0131 06:43:41.603559 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.695227 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.695289 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.695305 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.695329 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.695347 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:41Z","lastTransitionTime":"2026-01-31T06:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.799430 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.799482 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.799492 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.799509 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.799518 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:41Z","lastTransitionTime":"2026-01-31T06:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.902373 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.902439 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.902451 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.902468 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.902480 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:41Z","lastTransitionTime":"2026-01-31T06:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.922628 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovnkube-controller/1.log" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.927350 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" event={"ID":"5bffca17-c223-4bd0-b78c-a5b059413223","Type":"ContainerStarted","Data":"471c610ac5099adb133f88c53cc5652dd61ebf82ac79c8eb27e0842b7c4ae63b"} Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.963556 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hbxj7"] Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.964455 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:41 crc kubenswrapper[4687]: E0131 06:43:41.964652 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.979556 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.990844 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptable
s-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:41 crc kubenswrapper[4687]: I0131 06:43:41.999853 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.004525 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.004576 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.004587 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.004606 4687 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.004618 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:42Z","lastTransitionTime":"2026-01-31T06:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.011763 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.023672 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.026319 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs\") pod \"network-metrics-daemon-hbxj7\" (UID: \"dead0f10-2469-49a4-8d26-93fc90d6451d\") " pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:42 crc 
kubenswrapper[4687]: I0131 06:43:42.026522 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scf27\" (UniqueName: \"kubernetes.io/projected/dead0f10-2469-49a4-8d26-93fc90d6451d-kube-api-access-scf27\") pod \"network-metrics-daemon-hbxj7\" (UID: \"dead0f10-2469-49a4-8d26-93fc90d6451d\") " pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.035001 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.046599 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06
:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.061127 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\"
:\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.079809 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:39Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 06:43:39.555600 6030 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555647 6030 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555680 6030 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 06:43:39.555709 6030 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 06:43:39.555714 6030 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 06:43:39.555857 6030 factory.go:656] Stopping watch factory\\\\nI0131 06:43:39.555862 6030 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555921 6030 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555958 6030 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.556150 6030 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 06:43:39.556183 6030 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"nshift-multus/multus-additional-cni-plugins-jlk4z after 0 failed attempt(s)\\\\nI0131 06:43:40.648883 6241 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 06:43:40.648884 6241 services_controller.go:360] Finished syncing service api on namespace openshift-oauth-apiserver for network=default : 2.720146ms\\\\nI0131 06:43:40.648854 6241 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bfpqq\\\\nF0131 
06:43:40.648882 6241 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z]\\\\nI0131 06:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\
":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.091451 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.105599 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.106332 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.106360 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.106370 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.106384 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.106394 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:42Z","lastTransitionTime":"2026-01-31T06:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.116283 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea
02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.127355 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs\") pod \"network-metrics-daemon-hbxj7\" (UID: \"dead0f10-2469-49a4-8d26-93fc90d6451d\") " pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.127396 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scf27\" (UniqueName: \"kubernetes.io/projected/dead0f10-2469-49a4-8d26-93fc90d6451d-kube-api-access-scf27\") pod \"network-metrics-daemon-hbxj7\" (UID: \"dead0f10-2469-49a4-8d26-93fc90d6451d\") " pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:42 crc 
kubenswrapper[4687]: E0131 06:43:42.127538 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 06:43:42 crc kubenswrapper[4687]: E0131 06:43:42.127589 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs podName:dead0f10-2469-49a4-8d26-93fc90d6451d nodeName:}" failed. No retries permitted until 2026-01-31 06:43:42.627576697 +0000 UTC m=+48.904836272 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs") pod "network-metrics-daemon-hbxj7" (UID: "dead0f10-2469-49a4-8d26-93fc90d6451d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.127938 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.141558 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.146654 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scf27\" (UniqueName: \"kubernetes.io/projected/dead0f10-2469-49a4-8d26-93fc90d6451d-kube-api-access-scf27\") pod \"network-metrics-daemon-hbxj7\" (UID: \"dead0f10-2469-49a4-8d26-93fc90d6451d\") " pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.153443 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.163425 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.208244 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.208288 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.208297 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.208312 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.208321 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:42Z","lastTransitionTime":"2026-01-31T06:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.310104 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.310151 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.310161 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.310175 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.310184 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:42Z","lastTransitionTime":"2026-01-31T06:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.413022 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.413068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.413080 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.413097 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.413120 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:42Z","lastTransitionTime":"2026-01-31T06:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.514743 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.514782 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.514795 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.514811 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.514821 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:42Z","lastTransitionTime":"2026-01-31T06:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.562213 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 09:54:40.268914741 +0000 UTC Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.617665 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.617714 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.617727 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.617743 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.617754 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:42Z","lastTransitionTime":"2026-01-31T06:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.633373 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs\") pod \"network-metrics-daemon-hbxj7\" (UID: \"dead0f10-2469-49a4-8d26-93fc90d6451d\") " pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:42 crc kubenswrapper[4687]: E0131 06:43:42.633561 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 06:43:42 crc kubenswrapper[4687]: E0131 06:43:42.633629 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs podName:dead0f10-2469-49a4-8d26-93fc90d6451d nodeName:}" failed. No retries permitted until 2026-01-31 06:43:43.633608586 +0000 UTC m=+49.910868181 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs") pod "network-metrics-daemon-hbxj7" (UID: "dead0f10-2469-49a4-8d26-93fc90d6451d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.720153 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.720195 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.720205 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.720220 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.720232 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:42Z","lastTransitionTime":"2026-01-31T06:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.822191 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.822278 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.822291 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.822308 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.822322 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:42Z","lastTransitionTime":"2026-01-31T06:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.925022 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.925062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.925072 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.925085 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.925095 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:42Z","lastTransitionTime":"2026-01-31T06:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.946169 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.957728 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.969220 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:42 crc kubenswrapper[4687]: I0131 06:43:42.982046 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.001540 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.022201 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.027996 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.028030 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.028040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.028053 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.028061 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:43Z","lastTransitionTime":"2026-01-31T06:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.045494 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.058972 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.068778 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.079324 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.090046 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.100658 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61eb
f82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.113083 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.129547 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e
79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:
43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.131379 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.131411 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.131445 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.131462 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.131475 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:43Z","lastTransitionTime":"2026-01-31T06:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.150789 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:39Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 06:43:39.555600 6030 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555647 6030 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555680 6030 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 06:43:39.555709 6030 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 06:43:39.555714 6030 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 06:43:39.555857 6030 factory.go:656] Stopping watch factory\\\\nI0131 06:43:39.555862 6030 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555921 6030 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555958 6030 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.556150 6030 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 06:43:39.556183 6030 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"nshift-multus/multus-additional-cni-plugins-jlk4z after 0 failed attempt(s)\\\\nI0131 06:43:40.648883 6241 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 06:43:40.648884 6241 services_controller.go:360] Finished syncing service api on namespace openshift-oauth-apiserver for network=default : 2.720146ms\\\\nI0131 06:43:40.648854 6241 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bfpqq\\\\nF0131 
06:43:40.648882 6241 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z]\\\\nI0131 06:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\
":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.161801 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.234046 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.234095 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.234105 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.234118 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.234126 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:43Z","lastTransitionTime":"2026-01-31T06:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.337126 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.337267 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.337280 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.337296 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.337305 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:43Z","lastTransitionTime":"2026-01-31T06:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.442822 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.442874 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.442886 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.442905 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.442916 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:43Z","lastTransitionTime":"2026-01-31T06:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.545391 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.545452 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.545479 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.545493 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.545503 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:43Z","lastTransitionTime":"2026-01-31T06:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.562930 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:34:05.722257018 +0000 UTC Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.602939 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.603032 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:43 crc kubenswrapper[4687]: E0131 06:43:43.603069 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:43:43 crc kubenswrapper[4687]: E0131 06:43:43.603148 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.603202 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.603234 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:43 crc kubenswrapper[4687]: E0131 06:43:43.603289 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:43 crc kubenswrapper[4687]: E0131 06:43:43.603348 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.644338 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs\") pod \"network-metrics-daemon-hbxj7\" (UID: \"dead0f10-2469-49a4-8d26-93fc90d6451d\") " pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:43 crc kubenswrapper[4687]: E0131 06:43:43.644480 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 06:43:43 crc kubenswrapper[4687]: E0131 06:43:43.644547 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs podName:dead0f10-2469-49a4-8d26-93fc90d6451d nodeName:}" failed. No retries permitted until 2026-01-31 06:43:45.644529986 +0000 UTC m=+51.921789561 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs") pod "network-metrics-daemon-hbxj7" (UID: "dead0f10-2469-49a4-8d26-93fc90d6451d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.647768 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.647809 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.647821 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.647836 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.647846 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:43Z","lastTransitionTime":"2026-01-31T06:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.750356 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.750697 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.750797 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.750892 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.750971 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:43Z","lastTransitionTime":"2026-01-31T06:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.853079 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.853119 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.853130 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.853147 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.853171 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:43Z","lastTransitionTime":"2026-01-31T06:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.956086 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.956147 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.956159 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.956175 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:43 crc kubenswrapper[4687]: I0131 06:43:43.956187 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:43Z","lastTransitionTime":"2026-01-31T06:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.058990 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.059222 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.059233 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.059250 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.059261 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:44Z","lastTransitionTime":"2026-01-31T06:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.161205 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.161327 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.161338 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.161352 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.161361 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:44Z","lastTransitionTime":"2026-01-31T06:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.264140 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.264200 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.264211 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.264236 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.264248 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:44Z","lastTransitionTime":"2026-01-31T06:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.367500 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.367574 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.367592 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.367617 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.367635 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:44Z","lastTransitionTime":"2026-01-31T06:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.470790 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.470825 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.470835 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.470849 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.470861 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:44Z","lastTransitionTime":"2026-01-31T06:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.564210 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:30:13.654741217 +0000 UTC Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.572961 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.573003 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.573011 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.573024 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.573033 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:44Z","lastTransitionTime":"2026-01-31T06:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.675790 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.675846 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.675861 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.675883 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.675898 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:44Z","lastTransitionTime":"2026-01-31T06:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.778888 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.778940 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.778955 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.778975 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.778989 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:44Z","lastTransitionTime":"2026-01-31T06:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.881085 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.881116 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.881125 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.881137 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.881148 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:44Z","lastTransitionTime":"2026-01-31T06:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.984271 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.984330 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.984343 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.984361 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:44 crc kubenswrapper[4687]: I0131 06:43:44.984374 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:44Z","lastTransitionTime":"2026-01-31T06:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.086268 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.086327 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.086339 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.086354 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.086387 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:45Z","lastTransitionTime":"2026-01-31T06:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.189523 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.189594 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.189606 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.189644 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.189659 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:45Z","lastTransitionTime":"2026-01-31T06:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.292275 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.292330 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.292344 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.292364 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.292377 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:45Z","lastTransitionTime":"2026-01-31T06:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.395594 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.395656 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.395668 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.395686 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.395697 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:45Z","lastTransitionTime":"2026-01-31T06:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.498160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.498207 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.498219 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.498236 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.498247 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:45Z","lastTransitionTime":"2026-01-31T06:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.565289 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:27:28.015857637 +0000 UTC Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.600444 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.600496 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.600510 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.600527 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.600539 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:45Z","lastTransitionTime":"2026-01-31T06:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.602692 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.602735 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.602765 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:45 crc kubenswrapper[4687]: E0131 06:43:45.602848 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.602900 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:45 crc kubenswrapper[4687]: E0131 06:43:45.602997 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:45 crc kubenswrapper[4687]: E0131 06:43:45.603102 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:45 crc kubenswrapper[4687]: E0131 06:43:45.603180 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.617671 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:45Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.634176 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"ex
itCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:45Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.653932 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://590561ec0ea313a737f57d7b4df090ec66eb2e2257480b8f009c9035717ab9b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:39Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 06:43:39.555600 6030 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555647 6030 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555680 6030 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 06:43:39.555709 6030 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 06:43:39.555714 6030 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 06:43:39.555857 6030 factory.go:656] Stopping watch factory\\\\nI0131 06:43:39.555862 6030 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555921 6030 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.555958 6030 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 06:43:39.556150 6030 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 06:43:39.556183 6030 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"nshift-multus/multus-additional-cni-plugins-jlk4z after 0 failed attempt(s)\\\\nI0131 06:43:40.648883 6241 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 06:43:40.648884 6241 services_controller.go:360] Finished syncing service api on namespace openshift-oauth-apiserver for network=default : 2.720146ms\\\\nI0131 06:43:40.648854 6241 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bfpqq\\\\nF0131 
06:43:40.648882 6241 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z]\\\\nI0131 06:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\
":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:45Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.666022 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:45Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.667688 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs\") pod \"network-metrics-daemon-hbxj7\" (UID: \"dead0f10-2469-49a4-8d26-93fc90d6451d\") " pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:45 crc kubenswrapper[4687]: E0131 06:43:45.667960 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 06:43:45 crc kubenswrapper[4687]: E0131 06:43:45.668062 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs podName:dead0f10-2469-49a4-8d26-93fc90d6451d nodeName:}" failed. No retries permitted until 2026-01-31 06:43:49.668037136 +0000 UTC m=+55.945296711 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs") pod "network-metrics-daemon-hbxj7" (UID: "dead0f10-2469-49a4-8d26-93fc90d6451d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.679849 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:45Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.694127 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:45Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.702284 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.702325 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:45 crc 
kubenswrapper[4687]: I0131 06:43:45.702334 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.702348 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.702359 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:45Z","lastTransitionTime":"2026-01-31T06:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.707189 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:45Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.717150 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:45Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:45 crc 
kubenswrapper[4687]: I0131 06:43:45.732497 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:45Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.748040 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:45Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.762241 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:45Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.773622 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:45Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.784884 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:45Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.797059 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:45Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.803998 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.804049 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.804057 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.804070 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.804078 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:45Z","lastTransitionTime":"2026-01-31T06:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.811084 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:45Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.823232 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61eb
f82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:45Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.906217 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.906272 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.906285 4687 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.906303 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:45 crc kubenswrapper[4687]: I0131 06:43:45.906315 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:45Z","lastTransitionTime":"2026-01-31T06:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.008098 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.008144 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.008156 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.008173 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.008188 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:46Z","lastTransitionTime":"2026-01-31T06:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.111614 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.111661 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.111670 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.111688 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.111699 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:46Z","lastTransitionTime":"2026-01-31T06:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.213738 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.213776 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.213784 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.213798 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.213810 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:46Z","lastTransitionTime":"2026-01-31T06:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.316045 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.316082 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.316091 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.316106 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.316115 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:46Z","lastTransitionTime":"2026-01-31T06:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.417987 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.418042 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.418054 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.418069 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.418080 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:46Z","lastTransitionTime":"2026-01-31T06:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.519721 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.519767 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.519780 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.519796 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.519809 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:46Z","lastTransitionTime":"2026-01-31T06:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.565564 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 22:34:40.628498207 +0000 UTC Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.622333 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.622406 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.622437 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.622450 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.622459 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:46Z","lastTransitionTime":"2026-01-31T06:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.724846 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.724932 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.724946 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.724970 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.724987 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:46Z","lastTransitionTime":"2026-01-31T06:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.827073 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.827118 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.827131 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.827152 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.827165 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:46Z","lastTransitionTime":"2026-01-31T06:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.930160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.930215 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.930239 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.930258 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:46 crc kubenswrapper[4687]: I0131 06:43:46.930271 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:46Z","lastTransitionTime":"2026-01-31T06:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.032500 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.032548 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.032559 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.032575 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.032587 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:47Z","lastTransitionTime":"2026-01-31T06:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.135893 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.135954 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.135970 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.135988 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.136000 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:47Z","lastTransitionTime":"2026-01-31T06:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.237979 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.238015 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.238023 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.238036 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.238046 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:47Z","lastTransitionTime":"2026-01-31T06:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.340160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.340215 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.340224 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.340237 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.340245 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:47Z","lastTransitionTime":"2026-01-31T06:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.442355 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.442443 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.442457 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.442484 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.442497 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:47Z","lastTransitionTime":"2026-01-31T06:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.545045 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.545133 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.545148 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.545168 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.545187 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:47Z","lastTransitionTime":"2026-01-31T06:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.566725 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:50:21.313066811 +0000 UTC Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.603384 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.603476 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.603453 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:47 crc kubenswrapper[4687]: E0131 06:43:47.603559 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.603431 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:47 crc kubenswrapper[4687]: E0131 06:43:47.603682 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:47 crc kubenswrapper[4687]: E0131 06:43:47.603746 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:43:47 crc kubenswrapper[4687]: E0131 06:43:47.603848 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.647542 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.647580 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.647590 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.647602 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.647612 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:47Z","lastTransitionTime":"2026-01-31T06:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.750638 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.750714 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.750726 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.750752 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.750766 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:47Z","lastTransitionTime":"2026-01-31T06:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.854205 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.854264 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.854277 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.854298 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.854311 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:47Z","lastTransitionTime":"2026-01-31T06:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.956559 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.956604 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.956616 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.956632 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:47 crc kubenswrapper[4687]: I0131 06:43:47.956644 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:47Z","lastTransitionTime":"2026-01-31T06:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.059356 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.059436 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.059449 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.059467 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.059480 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:48Z","lastTransitionTime":"2026-01-31T06:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.162330 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.163042 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.163079 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.163100 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.163114 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:48Z","lastTransitionTime":"2026-01-31T06:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.266664 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.266708 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.266718 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.266737 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.266749 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:48Z","lastTransitionTime":"2026-01-31T06:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.369430 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.369722 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.369958 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.370145 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.370213 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:48Z","lastTransitionTime":"2026-01-31T06:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.472637 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.472678 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.472689 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.472703 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.472718 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:48Z","lastTransitionTime":"2026-01-31T06:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.567478 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 13:47:35.396008154 +0000 UTC Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.576309 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.576535 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.576641 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.576754 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.576903 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:48Z","lastTransitionTime":"2026-01-31T06:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.680599 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.680650 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.680659 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.680675 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.680687 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:48Z","lastTransitionTime":"2026-01-31T06:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.783135 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.783170 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.783180 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.783194 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.783205 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:48Z","lastTransitionTime":"2026-01-31T06:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.886188 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.886254 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.886276 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.886311 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.886336 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:48Z","lastTransitionTime":"2026-01-31T06:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.989504 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.989549 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.989561 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.989581 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:48 crc kubenswrapper[4687]: I0131 06:43:48.989596 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:48Z","lastTransitionTime":"2026-01-31T06:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.091150 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.091184 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.091193 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.091207 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.091218 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:49Z","lastTransitionTime":"2026-01-31T06:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.193556 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.193605 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.193615 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.193631 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.193645 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:49Z","lastTransitionTime":"2026-01-31T06:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.296497 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.296535 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.296544 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.296559 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.296569 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:49Z","lastTransitionTime":"2026-01-31T06:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.399065 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.399377 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.399536 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.399713 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.399836 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:49Z","lastTransitionTime":"2026-01-31T06:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.503030 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.503073 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.503085 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.503102 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.503114 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:49Z","lastTransitionTime":"2026-01-31T06:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.568481 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:02:04.702461924 +0000 UTC Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.602886 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.602949 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:49 crc kubenswrapper[4687]: E0131 06:43:49.603014 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.603093 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:49 crc kubenswrapper[4687]: E0131 06:43:49.603175 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.603228 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:49 crc kubenswrapper[4687]: E0131 06:43:49.603289 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:43:49 crc kubenswrapper[4687]: E0131 06:43:49.603343 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.604711 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.604740 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.604748 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.604760 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.604769 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:49Z","lastTransitionTime":"2026-01-31T06:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.708029 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.708103 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.708115 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.708132 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.708145 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:49Z","lastTransitionTime":"2026-01-31T06:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.710909 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs\") pod \"network-metrics-daemon-hbxj7\" (UID: \"dead0f10-2469-49a4-8d26-93fc90d6451d\") " pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:49 crc kubenswrapper[4687]: E0131 06:43:49.711256 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 06:43:49 crc kubenswrapper[4687]: E0131 06:43:49.711372 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs podName:dead0f10-2469-49a4-8d26-93fc90d6451d nodeName:}" failed. No retries permitted until 2026-01-31 06:43:57.711340078 +0000 UTC m=+63.988599683 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs") pod "network-metrics-daemon-hbxj7" (UID: "dead0f10-2469-49a4-8d26-93fc90d6451d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.810368 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.810426 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.810437 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.810450 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.810460 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:49Z","lastTransitionTime":"2026-01-31T06:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.913173 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.913222 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.913234 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.913271 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:49 crc kubenswrapper[4687]: I0131 06:43:49.913285 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:49Z","lastTransitionTime":"2026-01-31T06:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.016110 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.016157 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.016166 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.016181 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.016191 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:50Z","lastTransitionTime":"2026-01-31T06:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.117806 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.117851 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.117861 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.117874 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.117883 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:50Z","lastTransitionTime":"2026-01-31T06:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.220118 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.220162 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.220170 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.220183 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.220194 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:50Z","lastTransitionTime":"2026-01-31T06:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.322762 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.322813 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.322826 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.322842 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.322853 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:50Z","lastTransitionTime":"2026-01-31T06:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.425641 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.425702 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.425713 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.425732 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.425746 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:50Z","lastTransitionTime":"2026-01-31T06:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.529122 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.529188 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.529201 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.529223 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.529236 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:50Z","lastTransitionTime":"2026-01-31T06:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.569233 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:21:22.667700189 +0000 UTC Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.632016 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.632105 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.632116 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.632139 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.632152 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:50Z","lastTransitionTime":"2026-01-31T06:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.735389 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.735557 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.735575 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.735600 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.735619 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:50Z","lastTransitionTime":"2026-01-31T06:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.838774 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.838814 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.838823 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.838838 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.838848 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:50Z","lastTransitionTime":"2026-01-31T06:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.941463 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.941510 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.941522 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.941535 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:50 crc kubenswrapper[4687]: I0131 06:43:50.941554 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:50Z","lastTransitionTime":"2026-01-31T06:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.029745 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.029777 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.029787 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.029807 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.029818 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:51Z","lastTransitionTime":"2026-01-31T06:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:51 crc kubenswrapper[4687]: E0131 06:43:51.040698 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:51Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.044581 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.044616 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.044624 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.044637 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.044646 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:51Z","lastTransitionTime":"2026-01-31T06:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:51 crc kubenswrapper[4687]: E0131 06:43:51.056379 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:51Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.059707 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.059744 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.059754 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.059766 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.059775 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:51Z","lastTransitionTime":"2026-01-31T06:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:51 crc kubenswrapper[4687]: E0131 06:43:51.074552 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:51Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.078214 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.078257 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.078268 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.078288 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.078302 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:51Z","lastTransitionTime":"2026-01-31T06:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:51 crc kubenswrapper[4687]: E0131 06:43:51.090521 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:51Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.094076 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.094122 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.094135 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.094152 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.094163 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:51Z","lastTransitionTime":"2026-01-31T06:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:51 crc kubenswrapper[4687]: E0131 06:43:51.106423 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:51Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:51 crc kubenswrapper[4687]: E0131 06:43:51.106680 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.111576 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.111608 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.111618 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.111632 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.111643 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:51Z","lastTransitionTime":"2026-01-31T06:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.214034 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.214071 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.214081 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.214097 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.214111 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:51Z","lastTransitionTime":"2026-01-31T06:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.317326 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.317369 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.317381 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.317399 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.317441 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:51Z","lastTransitionTime":"2026-01-31T06:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.419572 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.419620 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.419631 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.419647 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.419658 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:51Z","lastTransitionTime":"2026-01-31T06:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.522645 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.522691 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.522701 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.522715 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.522726 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:51Z","lastTransitionTime":"2026-01-31T06:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.570242 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 02:35:54.457263157 +0000 UTC Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.602904 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.602936 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.602960 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:51 crc kubenswrapper[4687]: E0131 06:43:51.603059 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.603118 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:51 crc kubenswrapper[4687]: E0131 06:43:51.603263 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:51 crc kubenswrapper[4687]: E0131 06:43:51.603464 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:43:51 crc kubenswrapper[4687]: E0131 06:43:51.603599 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.625450 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.625480 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.625497 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.625511 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.625521 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:51Z","lastTransitionTime":"2026-01-31T06:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.728854 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.728906 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.728922 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.728945 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.728961 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:51Z","lastTransitionTime":"2026-01-31T06:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.831293 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.831328 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.831336 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.831351 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.831361 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:51Z","lastTransitionTime":"2026-01-31T06:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.933567 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.933618 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.933629 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.933649 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:51 crc kubenswrapper[4687]: I0131 06:43:51.933661 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:51Z","lastTransitionTime":"2026-01-31T06:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.037997 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.038059 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.038074 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.038094 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.038109 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:52Z","lastTransitionTime":"2026-01-31T06:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.141018 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.141070 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.141087 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.141103 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.141113 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:52Z","lastTransitionTime":"2026-01-31T06:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.243316 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.243360 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.243370 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.243384 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.243394 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:52Z","lastTransitionTime":"2026-01-31T06:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.345446 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.345495 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.345507 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.345524 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.345536 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:52Z","lastTransitionTime":"2026-01-31T06:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.447344 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.447402 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.447432 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.447447 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.447458 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:52Z","lastTransitionTime":"2026-01-31T06:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.550209 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.550250 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.550262 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.550280 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.550293 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:52Z","lastTransitionTime":"2026-01-31T06:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.570743 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 12:13:30.917520123 +0000 UTC Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.651732 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.651771 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.651782 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.651799 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.651812 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:52Z","lastTransitionTime":"2026-01-31T06:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.754063 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.754099 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.754107 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.754121 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.754130 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:52Z","lastTransitionTime":"2026-01-31T06:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.856693 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.856733 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.856743 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.856755 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.856763 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:52Z","lastTransitionTime":"2026-01-31T06:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.958849 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.958882 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.958890 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.958904 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:52 crc kubenswrapper[4687]: I0131 06:43:52.958913 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:52Z","lastTransitionTime":"2026-01-31T06:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.061390 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.061451 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.061465 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.061483 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.061496 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:53Z","lastTransitionTime":"2026-01-31T06:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.164079 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.164121 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.164132 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.164148 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.164163 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:53Z","lastTransitionTime":"2026-01-31T06:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.266992 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.267031 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.267049 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.267065 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.267077 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:53Z","lastTransitionTime":"2026-01-31T06:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.369838 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.369896 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.369914 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.369933 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.369949 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:53Z","lastTransitionTime":"2026-01-31T06:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.472988 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.473039 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.473053 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.473073 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.473087 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:53Z","lastTransitionTime":"2026-01-31T06:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.570987 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:30:34.012955843 +0000 UTC Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.575741 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.575790 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.575800 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.575813 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.575824 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:53Z","lastTransitionTime":"2026-01-31T06:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.602854 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.602911 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.602873 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:53 crc kubenswrapper[4687]: E0131 06:43:53.602999 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:53 crc kubenswrapper[4687]: E0131 06:43:53.603123 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:53 crc kubenswrapper[4687]: E0131 06:43:53.603187 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.603240 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:53 crc kubenswrapper[4687]: E0131 06:43:53.603302 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.678196 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.678237 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.678248 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.678262 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.678273 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:53Z","lastTransitionTime":"2026-01-31T06:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.780918 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.780965 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.780973 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.780989 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.780998 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:53Z","lastTransitionTime":"2026-01-31T06:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.883870 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.883923 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.883938 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.883955 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.883967 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:53Z","lastTransitionTime":"2026-01-31T06:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.986528 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.986583 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.986597 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.986613 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:53 crc kubenswrapper[4687]: I0131 06:43:53.986625 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:53Z","lastTransitionTime":"2026-01-31T06:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.089046 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.089081 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.089089 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.089102 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.089112 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:54Z","lastTransitionTime":"2026-01-31T06:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.190518 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.190551 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.190559 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.190572 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.190764 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:54Z","lastTransitionTime":"2026-01-31T06:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.292996 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.293038 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.293050 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.293065 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.293076 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:54Z","lastTransitionTime":"2026-01-31T06:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.395999 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.396045 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.396055 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.396071 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.396081 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:54Z","lastTransitionTime":"2026-01-31T06:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.498089 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.498259 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.498280 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.498296 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.498305 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:54Z","lastTransitionTime":"2026-01-31T06:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.572042 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 22:56:37.504900046 +0000 UTC Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.601381 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.601445 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.601458 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.601475 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.601489 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:54Z","lastTransitionTime":"2026-01-31T06:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.603558 4687 scope.go:117] "RemoveContainer" containerID="449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.610215 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.619938 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":
\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.636024 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb0
85a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":
\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.653083 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"nshift-multus/multus-additional-cni-plugins-jlk4z after 0 failed attempt(s)\\\\nI0131 
06:43:40.648883 6241 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 06:43:40.648884 6241 services_controller.go:360] Finished syncing service api on namespace openshift-oauth-apiserver for network=default : 2.720146ms\\\\nI0131 06:43:40.648854 6241 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bfpqq\\\\nF0131 06:43:40.648882 6241 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z]\\\\nI0131 06:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae0636
69f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.665125 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.681723 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.695567 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.703493 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.703524 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:54 crc 
kubenswrapper[4687]: I0131 06:43:54.703533 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.703547 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.703558 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:54Z","lastTransitionTime":"2026-01-31T06:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.709280 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.720820 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc 
kubenswrapper[4687]: I0131 06:43:54.733454 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.747258 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.762635 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.776071 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.789099 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.803457 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.805944 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.805971 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.805983 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.805996 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.806008 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:54Z","lastTransitionTime":"2026-01-31T06:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.814782 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.825294 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61eb
f82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.839495 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4
e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.852661 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.864768 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.875991 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.890080 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.907641 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc 
kubenswrapper[4687]: I0131 06:43:54.909502 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.909536 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.909548 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.909565 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.909578 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:54Z","lastTransitionTime":"2026-01-31T06:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.925679 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.939499 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.951757 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.964632 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.966697 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovnkube-controller/1.log" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.968980 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" 
event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerStarted","Data":"9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1"} Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.969469 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.986291 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-
cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:54 crc kubenswrapper[4687]: I0131 06:43:54.997699 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61eb
f82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:54Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.011141 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.012077 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:55 crc 
kubenswrapper[4687]: I0131 06:43:55.012115 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.012125 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.012141 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.012151 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:55Z","lastTransitionTime":"2026-01-31T06:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.024252 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c
05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.039451 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"nshift-multus/multus-additional-cni-plugins-jlk4z after 0 failed attempt(s)\\\\nI0131 06:43:40.648883 6241 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 06:43:40.648884 6241 services_controller.go:360] Finished syncing service api on namespace 
openshift-oauth-apiserver for network=default : 2.720146ms\\\\nI0131 06:43:40.648854 6241 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bfpqq\\\\nF0131 06:43:40.648882 6241 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z]\\\\nI0131 06:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae0636
69f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.047889 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.059392 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.071432 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.082062 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.093991 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb0
40c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.106727 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.114184 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.114231 4687 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.114243 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.114260 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.114274 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:55Z","lastTransitionTime":"2026-01-31T06:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.118574 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61eb
f82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.129333 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.141070 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.154784 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e
547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d
440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31
T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.171811 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"nshift-multus/multus-additional-cni-plugins-jlk4z after 0 failed 
attempt(s)\\\\nI0131 06:43:40.648883 6241 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 06:43:40.648884 6241 services_controller.go:360] Finished syncing service api on namespace openshift-oauth-apiserver for network=default : 2.720146ms\\\\nI0131 06:43:40.648854 6241 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bfpqq\\\\nF0131 06:43:40.648882 6241 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z]\\\\nI0131 
06:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.181268 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc 
kubenswrapper[4687]: I0131 06:43:55.191706 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f
68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 
06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.203198 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.214609 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.216049 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.216094 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.216108 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.216124 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.216134 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:55Z","lastTransitionTime":"2026-01-31T06:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.226239 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.239747 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.318015 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.318052 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.318062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.318075 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.318085 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:55Z","lastTransitionTime":"2026-01-31T06:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.420311 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.420350 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.420363 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.420381 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.420393 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:55Z","lastTransitionTime":"2026-01-31T06:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.468012 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.468130 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.468162 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.468251 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.468297 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 06:44:27.468284889 +0000 UTC m=+93.745544464 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.468479 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:44:27.468466453 +0000 UTC m=+93.745726028 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.468571 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.468618 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 06:44:27.468609027 +0000 UTC m=+93.745868602 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.522550 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.522584 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.522594 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.522608 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.522616 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:55Z","lastTransitionTime":"2026-01-31T06:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.569126 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.569224 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.569383 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.569446 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.569466 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.569544 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 06:44:27.56952205 +0000 UTC m=+93.846781635 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.569626 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.569659 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.569678 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.569771 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 06:44:27.569741705 +0000 UTC m=+93.847001480 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.572845 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 03:48:45.262003216 +0000 UTC Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.602749 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.602749 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.602939 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.602783 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.602749 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.603045 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.603058 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.603107 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.614842 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"n
ame\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.628039 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.628074 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.628086 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.628103 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.628116 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:55Z","lastTransitionTime":"2026-01-31T06:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.628535 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z 
is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.644210 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.666877 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"nshift-multus/multus-additional-cni-plugins-jlk4z after 0 failed attempt(s)\\\\nI0131 06:43:40.648883 6241 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 06:43:40.648884 6241 services_controller.go:360] Finished syncing service api on namespace 
openshift-oauth-apiserver for network=default : 2.720146ms\\\\nI0131 06:43:40.648854 6241 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bfpqq\\\\nF0131 06:43:40.648882 6241 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z]\\\\nI0131 
06:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.679631 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc 
kubenswrapper[4687]: I0131 06:43:55.693567 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f
68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 
06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.708950 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.722267 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.730462 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.730719 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.730835 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.730956 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.731044 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:55Z","lastTransitionTime":"2026-01-31T06:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.739781 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.752027 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.769968 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.786138 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.798479 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.809392 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.821051 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.831218 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61eb
f82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.832893 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.832921 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.832952 4687 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.832968 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.832978 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:55Z","lastTransitionTime":"2026-01-31T06:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.934692 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.934772 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.934794 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.934824 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.934845 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:55Z","lastTransitionTime":"2026-01-31T06:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.975105 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovnkube-controller/2.log" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.975696 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovnkube-controller/1.log" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.978867 4687 generic.go:334] "Generic (PLEG): container finished" podID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerID="9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1" exitCode=1 Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.978918 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerDied","Data":"9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1"} Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.978956 4687 scope.go:117] "RemoveContainer" containerID="449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.979625 4687 scope.go:117] "RemoveContainer" containerID="9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1" Jan 31 06:43:55 crc kubenswrapper[4687]: E0131 06:43:55.979762 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" Jan 31 06:43:55 crc kubenswrapper[4687]: I0131 06:43:55.996304 4687 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01
-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534
a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:55Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.018591 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"nshift-multus/multus-additional-cni-plugins-jlk4z after 0 failed attempt(s)\\\\nI0131 06:43:40.648883 6241 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 06:43:40.648884 6241 services_controller.go:360] Finished syncing service api on namespace 
openshift-oauth-apiserver for network=default : 2.720146ms\\\\nI0131 06:43:40.648854 6241 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bfpqq\\\\nF0131 06:43:40.648882 6241 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z]\\\\nI0131 06:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:55Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336726 6448 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0131 06:43:55.336734 6448 services_controller.go:454] Service openshift-machine-api/machine-api-operator for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0131 06:43:55.336746 6448 
model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336686 6448 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0131 06:43:55.336768 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-
cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.030067 4687 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.038468 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.038528 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.038540 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.038558 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.038570 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:56Z","lastTransitionTime":"2026-01-31T06:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.047816 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z 
is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.065898 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.084044 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.097940 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc 
kubenswrapper[4687]: I0131 06:43:56.117563 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f
68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 
06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.131834 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.141258 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.141290 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.141299 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.141315 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.141915 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:56Z","lastTransitionTime":"2026-01-31T06:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.146278 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.158042 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.171955 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.195108 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.215775 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptable
s-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.235699 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.244311 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.244357 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.244368 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.244398 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.244424 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:56Z","lastTransitionTime":"2026-01-31T06:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.250535 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61ebf82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.346595 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:56 crc 
kubenswrapper[4687]: I0131 06:43:56.346653 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.346665 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.346683 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.346693 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:56Z","lastTransitionTime":"2026-01-31T06:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.449663 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.449704 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.449713 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.449726 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.449734 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:56Z","lastTransitionTime":"2026-01-31T06:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.551822 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.551853 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.551862 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.551875 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.551884 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:56Z","lastTransitionTime":"2026-01-31T06:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.573324 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:34:24.942049327 +0000 UTC Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.654098 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.654137 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.654145 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.654159 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.654167 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:56Z","lastTransitionTime":"2026-01-31T06:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.756496 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.756535 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.756544 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.756557 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.756567 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:56Z","lastTransitionTime":"2026-01-31T06:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.790449 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.801153 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.803686 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTi
me\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.820070 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687
fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.845370 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://449d257b54ac27699001edd746aee248d3d71c33fede56a47c10ccde236f0d2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"message\\\":\\\"nshift-multus/multus-additional-cni-plugins-jlk4z after 0 failed attempt(s)\\\\nI0131 
06:43:40.648883 6241 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 06:43:40.648884 6241 services_controller.go:360] Finished syncing service api on namespace openshift-oauth-apiserver for network=default : 2.720146ms\\\\nI0131 06:43:40.648854 6241 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bfpqq\\\\nF0131 06:43:40.648882 6241 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:40Z is after 2025-08-24T17:21:41Z]\\\\nI0131 06:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:55Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336726 6448 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0131 06:43:55.336734 6448 
services_controller.go:454] Service openshift-machine-api/machine-api-operator for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0131 06:43:55.336746 6448 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336686 6448 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0131 06:43:55.336768 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b295245
5d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.859228 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.859309 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.859333 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.859368 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.859395 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:56Z","lastTransitionTime":"2026-01-31T06:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.860289 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":
\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.875445 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.886938 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.898941 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.911028 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.921835 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc 
kubenswrapper[4687]: I0131 06:43:56.938164 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f
68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 
06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.951549 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.962075 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.962118 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.962132 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.962149 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.962160 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:56Z","lastTransitionTime":"2026-01-31T06:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.964545 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.976748 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.984299 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovnkube-controller/2.log" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.988516 4687 scope.go:117] "RemoveContainer" containerID="9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1" Jan 31 06:43:56 crc kubenswrapper[4687]: E0131 06:43:56.988690 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" Jan 31 06:43:56 crc kubenswrapper[4687]: I0131 06:43:56.989963 4687 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:56Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.005761 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61eb
f82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.023932 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.039117 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.053152 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61eb
f82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.065088 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.065120 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.065132 4687 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.065146 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.065155 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:57Z","lastTransitionTime":"2026-01-31T06:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.069447 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.085482 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.111903 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:55Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336726 6448 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0131 06:43:55.336734 6448 services_controller.go:454] Service openshift-machine-api/machine-api-operator for 
network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0131 06:43:55.336746 6448 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336686 6448 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0131 06:43:55.336768 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae0636
69f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.126123 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.145223 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T0
6:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.162269 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.167445 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.167514 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.167528 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.167554 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.167578 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:57Z","lastTransitionTime":"2026-01-31T06:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.178541 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.195060 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.213058 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.227391 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.239124 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a933e3df-17c7-4b7a-b8a4-f9fdfc1f9116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177e7d2f12d88ec65f696b1500003d22464529c2a37000255906cb4e3598c69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196dcd26b49160f379dc443180c93bb1148dd0d2977f516ff48a81d618e3caef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9f67c3167938a802b72b7df28c4cf024417f605419ea56865822746cceed27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb5
8a09b25d9f02315a0ad2183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.252345 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.263655 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.269585 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.269621 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.269632 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.269648 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.269674 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:57Z","lastTransitionTime":"2026-01-31T06:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.274146 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.289924 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.372396 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.372472 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.372484 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.372501 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.372516 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:57Z","lastTransitionTime":"2026-01-31T06:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.474777 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.475076 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.475153 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.475230 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.475292 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:57Z","lastTransitionTime":"2026-01-31T06:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.573552 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 09:44:22.575136943 +0000 UTC Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.577873 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.577947 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.577965 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.577989 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.578007 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:57Z","lastTransitionTime":"2026-01-31T06:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.602996 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.603053 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:57 crc kubenswrapper[4687]: E0131 06:43:57.603138 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.603173 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:57 crc kubenswrapper[4687]: E0131 06:43:57.603290 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:43:57 crc kubenswrapper[4687]: E0131 06:43:57.603354 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.603438 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:57 crc kubenswrapper[4687]: E0131 06:43:57.603488 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.680093 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.680127 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.680135 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.680148 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.680156 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:57Z","lastTransitionTime":"2026-01-31T06:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.782221 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.782553 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.782648 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.782739 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.782816 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:57Z","lastTransitionTime":"2026-01-31T06:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.797028 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs\") pod \"network-metrics-daemon-hbxj7\" (UID: \"dead0f10-2469-49a4-8d26-93fc90d6451d\") " pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:57 crc kubenswrapper[4687]: E0131 06:43:57.797165 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 06:43:57 crc kubenswrapper[4687]: E0131 06:43:57.797220 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs podName:dead0f10-2469-49a4-8d26-93fc90d6451d nodeName:}" failed. No retries permitted until 2026-01-31 06:44:13.797203781 +0000 UTC m=+80.074463366 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs") pod "network-metrics-daemon-hbxj7" (UID: "dead0f10-2469-49a4-8d26-93fc90d6451d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.885919 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.886027 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.886043 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.886060 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.886070 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:57Z","lastTransitionTime":"2026-01-31T06:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.988875 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.988919 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.988929 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.988947 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:57 crc kubenswrapper[4687]: I0131 06:43:57.988959 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:57Z","lastTransitionTime":"2026-01-31T06:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.091183 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.091222 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.091232 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.091248 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.091259 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:58Z","lastTransitionTime":"2026-01-31T06:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.194673 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.194712 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.194728 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.194742 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.194751 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:58Z","lastTransitionTime":"2026-01-31T06:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.297066 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.297117 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.297133 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.297156 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.297172 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:58Z","lastTransitionTime":"2026-01-31T06:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.401845 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.401913 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.401930 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.401978 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.402004 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:58Z","lastTransitionTime":"2026-01-31T06:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.504506 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.504558 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.504579 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.504603 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.504620 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:58Z","lastTransitionTime":"2026-01-31T06:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.574511 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 22:39:11.512179817 +0000 UTC Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.607477 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.607513 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.607524 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.607538 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.607549 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:58Z","lastTransitionTime":"2026-01-31T06:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.709687 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.709712 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.709721 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.709735 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.709745 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:58Z","lastTransitionTime":"2026-01-31T06:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.812877 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.812912 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.812920 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.812936 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.812949 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:58Z","lastTransitionTime":"2026-01-31T06:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.916258 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.916306 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.916320 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.916340 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:58 crc kubenswrapper[4687]: I0131 06:43:58.916352 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:58Z","lastTransitionTime":"2026-01-31T06:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.018646 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.019030 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.019097 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.019180 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.019245 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:59Z","lastTransitionTime":"2026-01-31T06:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.122514 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.122856 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.122868 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.122885 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.122898 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:59Z","lastTransitionTime":"2026-01-31T06:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.225488 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.225533 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.225549 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.225565 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.225576 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:59Z","lastTransitionTime":"2026-01-31T06:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.328910 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.328972 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.328992 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.329015 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.329031 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:59Z","lastTransitionTime":"2026-01-31T06:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.431589 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.431639 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.431657 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.431675 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.431686 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:59Z","lastTransitionTime":"2026-01-31T06:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.535013 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.535080 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.535107 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.535138 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.535156 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:59Z","lastTransitionTime":"2026-01-31T06:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.577384 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:41:50.659492473 +0000 UTC Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.602780 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.602864 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.602958 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:43:59 crc kubenswrapper[4687]: E0131 06:43:59.603076 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.603099 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:43:59 crc kubenswrapper[4687]: E0131 06:43:59.603176 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:43:59 crc kubenswrapper[4687]: E0131 06:43:59.603191 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:43:59 crc kubenswrapper[4687]: E0131 06:43:59.603370 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.638186 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.638231 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.638243 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.638260 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.638276 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:59Z","lastTransitionTime":"2026-01-31T06:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.740808 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.740847 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.740858 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.740874 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.740892 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:59Z","lastTransitionTime":"2026-01-31T06:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.842777 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.842809 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.842818 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.842833 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.842843 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:59Z","lastTransitionTime":"2026-01-31T06:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.945317 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.945657 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.945736 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.945820 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:43:59 crc kubenswrapper[4687]: I0131 06:43:59.945890 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:43:59Z","lastTransitionTime":"2026-01-31T06:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.047886 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.047925 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.047934 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.047949 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.047958 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:00Z","lastTransitionTime":"2026-01-31T06:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.150100 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.150136 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.150146 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.150160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.150171 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:00Z","lastTransitionTime":"2026-01-31T06:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.252790 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.252828 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.252838 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.252853 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.252864 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:00Z","lastTransitionTime":"2026-01-31T06:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.356375 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.356683 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.356781 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.356861 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.356927 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:00Z","lastTransitionTime":"2026-01-31T06:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.459881 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.459975 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.459997 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.460043 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.460099 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:00Z","lastTransitionTime":"2026-01-31T06:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.563331 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.563367 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.563375 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.563390 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.563399 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:00Z","lastTransitionTime":"2026-01-31T06:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.577614 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:03:16.224190677 +0000 UTC Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.665955 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.665991 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.666003 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.666023 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.666035 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:00Z","lastTransitionTime":"2026-01-31T06:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.768722 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.768758 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.768765 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.768778 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.768786 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:00Z","lastTransitionTime":"2026-01-31T06:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.871972 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.872052 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.872074 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.872104 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.872126 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:00Z","lastTransitionTime":"2026-01-31T06:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.974777 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.974809 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.974819 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.974831 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:00 crc kubenswrapper[4687]: I0131 06:44:00.974840 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:00Z","lastTransitionTime":"2026-01-31T06:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.077863 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.077900 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.077910 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.077924 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.077933 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:01Z","lastTransitionTime":"2026-01-31T06:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.176731 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.176780 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.176792 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.176808 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.176823 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:01Z","lastTransitionTime":"2026-01-31T06:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:01 crc kubenswrapper[4687]: E0131 06:44:01.195385 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:01Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.199247 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.199381 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.199519 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.199589 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.199649 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:01Z","lastTransitionTime":"2026-01-31T06:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:01 crc kubenswrapper[4687]: E0131 06:44:01.212057 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:01Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.216305 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.216366 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.216376 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.216392 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.216402 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:01Z","lastTransitionTime":"2026-01-31T06:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:01 crc kubenswrapper[4687]: E0131 06:44:01.233034 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:01Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.237028 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.237210 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.237366 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.237581 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.237713 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:01Z","lastTransitionTime":"2026-01-31T06:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:01 crc kubenswrapper[4687]: E0131 06:44:01.251504 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:01Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.254810 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.254864 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.254882 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.254905 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.254922 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:01Z","lastTransitionTime":"2026-01-31T06:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:01 crc kubenswrapper[4687]: E0131 06:44:01.267715 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:01Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:01 crc kubenswrapper[4687]: E0131 06:44:01.268065 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.269639 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.269681 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.269694 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.269711 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.269723 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:01Z","lastTransitionTime":"2026-01-31T06:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.372249 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.372278 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.372287 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.372299 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.372307 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:01Z","lastTransitionTime":"2026-01-31T06:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.474446 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.474488 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.474498 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.474512 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.474524 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:01Z","lastTransitionTime":"2026-01-31T06:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.576364 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.576424 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.576441 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.576458 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.576470 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:01Z","lastTransitionTime":"2026-01-31T06:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.578546 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:06:24.001605056 +0000 UTC Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.602997 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.603087 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:01 crc kubenswrapper[4687]: E0131 06:44:01.603113 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.603155 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:01 crc kubenswrapper[4687]: E0131 06:44:01.603253 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:01 crc kubenswrapper[4687]: E0131 06:44:01.603325 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.603528 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:01 crc kubenswrapper[4687]: E0131 06:44:01.603703 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.678770 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.678813 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.678825 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.678841 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.678852 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:01Z","lastTransitionTime":"2026-01-31T06:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.780742 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.780781 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.780793 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.780811 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.780823 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:01Z","lastTransitionTime":"2026-01-31T06:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.883459 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.883511 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.883526 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.883546 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.883559 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:01Z","lastTransitionTime":"2026-01-31T06:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.985336 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.985810 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.985916 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.986012 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:01 crc kubenswrapper[4687]: I0131 06:44:01.986171 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:01Z","lastTransitionTime":"2026-01-31T06:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.089160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.089225 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.089237 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.089255 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.089267 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:02Z","lastTransitionTime":"2026-01-31T06:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.191402 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.191464 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.191475 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.191493 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.191505 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:02Z","lastTransitionTime":"2026-01-31T06:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.294661 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.294705 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.294714 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.294729 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.294739 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:02Z","lastTransitionTime":"2026-01-31T06:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.396856 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.396895 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.396906 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.396921 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.396933 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:02Z","lastTransitionTime":"2026-01-31T06:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.499260 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.499314 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.499328 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.499347 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.499360 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:02Z","lastTransitionTime":"2026-01-31T06:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.579305 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:48:01.760429412 +0000 UTC Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.601507 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.601580 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.601594 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.601622 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.601638 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:02Z","lastTransitionTime":"2026-01-31T06:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.704058 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.704100 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.704111 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.704128 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.704139 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:02Z","lastTransitionTime":"2026-01-31T06:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.805790 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.805821 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.805829 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.805841 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.805850 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:02Z","lastTransitionTime":"2026-01-31T06:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.908125 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.908355 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.908494 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.908597 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:02 crc kubenswrapper[4687]: I0131 06:44:02.908688 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:02Z","lastTransitionTime":"2026-01-31T06:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.011451 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.011499 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.011514 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.011529 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.011539 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:03Z","lastTransitionTime":"2026-01-31T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.114019 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.114089 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.114113 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.114144 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.114166 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:03Z","lastTransitionTime":"2026-01-31T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.217126 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.217157 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.217169 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.217184 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.217195 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:03Z","lastTransitionTime":"2026-01-31T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.319653 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.319689 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.319700 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.319714 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.319725 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:03Z","lastTransitionTime":"2026-01-31T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.421952 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.421996 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.422007 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.422021 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.422030 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:03Z","lastTransitionTime":"2026-01-31T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.525207 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.525255 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.525292 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.525308 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.525320 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:03Z","lastTransitionTime":"2026-01-31T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.580443 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:54:31.929961654 +0000 UTC Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.602791 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.602822 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.602813 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.602795 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:03 crc kubenswrapper[4687]: E0131 06:44:03.602931 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:03 crc kubenswrapper[4687]: E0131 06:44:03.603010 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:03 crc kubenswrapper[4687]: E0131 06:44:03.603094 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:03 crc kubenswrapper[4687]: E0131 06:44:03.603144 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.743353 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.743423 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.743440 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.743458 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.743476 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:03Z","lastTransitionTime":"2026-01-31T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.846430 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.846478 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.846489 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.846507 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.846522 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:03Z","lastTransitionTime":"2026-01-31T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.948966 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.949005 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.949012 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.949024 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:03 crc kubenswrapper[4687]: I0131 06:44:03.949033 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:03Z","lastTransitionTime":"2026-01-31T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.050963 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.051006 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.051014 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.051029 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.051038 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:04Z","lastTransitionTime":"2026-01-31T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.153603 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.153709 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.153722 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.153742 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.153754 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:04Z","lastTransitionTime":"2026-01-31T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.256751 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.256795 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.256817 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.256834 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.256846 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:04Z","lastTransitionTime":"2026-01-31T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.359626 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.359663 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.359675 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.359691 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.359702 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:04Z","lastTransitionTime":"2026-01-31T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.462005 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.462045 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.462056 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.462073 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.462085 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:04Z","lastTransitionTime":"2026-01-31T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.564092 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.564125 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.564133 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.564148 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.564158 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:04Z","lastTransitionTime":"2026-01-31T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.580972 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:41:57.308988899 +0000 UTC Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.666472 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.666728 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.666799 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.666885 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.666942 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:04Z","lastTransitionTime":"2026-01-31T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.770455 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.770503 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.770515 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.770533 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.770545 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:04Z","lastTransitionTime":"2026-01-31T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.873310 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.873355 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.873367 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.873386 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.873401 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:04Z","lastTransitionTime":"2026-01-31T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.975309 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.975363 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.975372 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.975389 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:04 crc kubenswrapper[4687]: I0131 06:44:04.975400 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:04Z","lastTransitionTime":"2026-01-31T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.078003 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.078798 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.078836 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.078862 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.078878 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:05Z","lastTransitionTime":"2026-01-31T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.181148 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.181189 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.181200 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.181216 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.181226 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:05Z","lastTransitionTime":"2026-01-31T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.283641 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.283676 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.283685 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.283698 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.283707 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:05Z","lastTransitionTime":"2026-01-31T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.386247 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.386310 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.386323 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.386341 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.386354 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:05Z","lastTransitionTime":"2026-01-31T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.488394 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.488475 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.488488 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.488502 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.488511 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:05Z","lastTransitionTime":"2026-01-31T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.581954 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:44:18.410093153 +0000 UTC Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.591209 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.591249 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.591263 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.591281 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.591292 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:05Z","lastTransitionTime":"2026-01-31T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.602671 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.602728 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.602811 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.602686 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:05 crc kubenswrapper[4687]: E0131 06:44:05.602850 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:05 crc kubenswrapper[4687]: E0131 06:44:05.602936 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:05 crc kubenswrapper[4687]: E0131 06:44:05.603024 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:05 crc kubenswrapper[4687]: E0131 06:44:05.603051 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.617230 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.631797 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exi
tCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.653022 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:55Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336726 6448 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0131 06:43:55.336734 6448 services_controller.go:454] Service openshift-machine-api/machine-api-operator for 
network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0131 06:43:55.336746 6448 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336686 6448 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0131 06:43:55.336768 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae0636
69f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.663122 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.676717 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T0
6:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.687579 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.693742 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.693778 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.693789 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.693804 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.693815 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:05Z","lastTransitionTime":"2026-01-31T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.699056 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.710538 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.722224 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.731775 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.743489 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a933e3df-17c7-4b7a-b8a4-f9fdfc1f9116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177e7d2f12d88ec65f696b1500003d22464529c2a37000255906cb4e3598c69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196dcd26b49160f379dc443180c93bb1148dd0d2977f516ff48a81d618e3caef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9f67c3167938a802b72b7df28c4cf024417f605419ea56865822746cceed27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb5
8a09b25d9f02315a0ad2183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.756710 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.767203 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.778434 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.788940 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.796201 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.796232 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.796246 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:05 crc 
kubenswrapper[4687]: I0131 06:44:05.796267 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.796279 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:05Z","lastTransitionTime":"2026-01-31T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.800292 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 
06:44:05.809782 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61ebf82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.897871 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.897912 4687 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.897926 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.897942 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:05 crc kubenswrapper[4687]: I0131 06:44:05.897953 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:05Z","lastTransitionTime":"2026-01-31T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.000264 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.000298 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.000309 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.000325 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.000339 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:06Z","lastTransitionTime":"2026-01-31T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.103160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.103191 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.103200 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.103214 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.103223 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:06Z","lastTransitionTime":"2026-01-31T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.205181 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.205221 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.205231 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.205244 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.205254 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:06Z","lastTransitionTime":"2026-01-31T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.307560 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.307607 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.307618 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.307636 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.307648 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:06Z","lastTransitionTime":"2026-01-31T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.410432 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.410470 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.410479 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.410492 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.410503 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:06Z","lastTransitionTime":"2026-01-31T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.512133 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.512168 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.512181 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.512198 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.512209 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:06Z","lastTransitionTime":"2026-01-31T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.582797 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:24:01.44674955 +0000 UTC Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.615095 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.615129 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.615139 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.615151 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.615160 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:06Z","lastTransitionTime":"2026-01-31T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.717396 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.717459 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.717470 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.717484 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.717495 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:06Z","lastTransitionTime":"2026-01-31T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.820585 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.820648 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.820658 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.820673 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.820694 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:06Z","lastTransitionTime":"2026-01-31T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.923066 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.923115 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.923127 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.923144 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:06 crc kubenswrapper[4687]: I0131 06:44:06.923157 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:06Z","lastTransitionTime":"2026-01-31T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.025539 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.025570 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.025582 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.025597 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.025608 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:07Z","lastTransitionTime":"2026-01-31T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.128179 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.128239 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.128252 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.128267 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.128278 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:07Z","lastTransitionTime":"2026-01-31T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.230509 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.230550 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.230563 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.230580 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.230593 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:07Z","lastTransitionTime":"2026-01-31T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.333250 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.333294 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.333306 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.333323 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.333333 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:07Z","lastTransitionTime":"2026-01-31T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.435701 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.435741 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.435749 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.435764 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.435774 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:07Z","lastTransitionTime":"2026-01-31T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.538015 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.538063 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.538080 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.538103 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.538116 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:07Z","lastTransitionTime":"2026-01-31T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.583261 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:33:27.071282337 +0000 UTC Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.603328 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.603336 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.603358 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.603475 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:07 crc kubenswrapper[4687]: E0131 06:44:07.603513 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:07 crc kubenswrapper[4687]: E0131 06:44:07.603633 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:07 crc kubenswrapper[4687]: E0131 06:44:07.603695 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:07 crc kubenswrapper[4687]: E0131 06:44:07.603740 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.640334 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.640369 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.640381 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.640398 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.640426 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:07Z","lastTransitionTime":"2026-01-31T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.742479 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.742522 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.742533 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.742549 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.742561 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:07Z","lastTransitionTime":"2026-01-31T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.844840 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.844893 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.844902 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.844914 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.844924 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:07Z","lastTransitionTime":"2026-01-31T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.946922 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.947194 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.947204 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.947218 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:07 crc kubenswrapper[4687]: I0131 06:44:07.947226 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:07Z","lastTransitionTime":"2026-01-31T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.048976 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.049014 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.049022 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.049035 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.049044 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:08Z","lastTransitionTime":"2026-01-31T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.158331 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.158382 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.158397 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.158441 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.158457 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:08Z","lastTransitionTime":"2026-01-31T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.260987 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.261024 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.261038 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.261054 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.261065 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:08Z","lastTransitionTime":"2026-01-31T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.363988 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.364037 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.364051 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.364068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.364080 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:08Z","lastTransitionTime":"2026-01-31T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.466142 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.466186 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.466196 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.466213 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.466225 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:08Z","lastTransitionTime":"2026-01-31T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.568311 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.568352 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.568361 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.568375 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.568385 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:08Z","lastTransitionTime":"2026-01-31T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.583429 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 00:38:19.358021163 +0000 UTC Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.670800 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.670843 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.670853 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.670868 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.670878 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:08Z","lastTransitionTime":"2026-01-31T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.772883 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.772916 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.772924 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.772937 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.772946 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:08Z","lastTransitionTime":"2026-01-31T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.875320 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.875361 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.875370 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.875385 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.875395 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:08Z","lastTransitionTime":"2026-01-31T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.977708 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.977748 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.977757 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.977771 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:08 crc kubenswrapper[4687]: I0131 06:44:08.977780 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:08Z","lastTransitionTime":"2026-01-31T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.080591 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.080913 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.081010 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.081096 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.081173 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:09Z","lastTransitionTime":"2026-01-31T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.183990 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.184049 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.184059 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.184088 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.184102 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:09Z","lastTransitionTime":"2026-01-31T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.287785 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.287837 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.287864 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.287882 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.287896 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:09Z","lastTransitionTime":"2026-01-31T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.391569 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.391926 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.392018 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.392101 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.392176 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:09Z","lastTransitionTime":"2026-01-31T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.495344 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.495398 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.495426 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.495458 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.495471 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:09Z","lastTransitionTime":"2026-01-31T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.584026 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 09:27:47.085300835 +0000 UTC Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.597898 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.598250 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.598328 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.598401 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.598492 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:09Z","lastTransitionTime":"2026-01-31T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.603342 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.603369 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.603461 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:09 crc kubenswrapper[4687]: E0131 06:44:09.603460 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.603391 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:09 crc kubenswrapper[4687]: E0131 06:44:09.603579 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:09 crc kubenswrapper[4687]: E0131 06:44:09.603670 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:09 crc kubenswrapper[4687]: E0131 06:44:09.603705 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.604324 4687 scope.go:117] "RemoveContainer" containerID="9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1" Jan 31 06:44:09 crc kubenswrapper[4687]: E0131 06:44:09.604480 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.701566 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.701608 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.701620 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.701637 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.701649 4687 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:09Z","lastTransitionTime":"2026-01-31T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.804424 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.804467 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.804476 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.804489 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.804498 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:09Z","lastTransitionTime":"2026-01-31T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.906660 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.906704 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.906716 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.906731 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:09 crc kubenswrapper[4687]: I0131 06:44:09.906741 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:09Z","lastTransitionTime":"2026-01-31T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.008745 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.008791 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.008799 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.008813 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.008822 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:10Z","lastTransitionTime":"2026-01-31T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.110861 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.110903 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.110913 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.110929 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.110941 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:10Z","lastTransitionTime":"2026-01-31T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.212764 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.212800 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.212811 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.212825 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.212834 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:10Z","lastTransitionTime":"2026-01-31T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.315498 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.315541 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.315553 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.315570 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.315581 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:10Z","lastTransitionTime":"2026-01-31T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.417888 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.417934 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.417953 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.417970 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.417989 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:10Z","lastTransitionTime":"2026-01-31T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.520149 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.520198 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.520215 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.520231 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.520242 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:10Z","lastTransitionTime":"2026-01-31T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.586063 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 11:16:04.516460274 +0000 UTC Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.622497 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.622548 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.622564 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.622583 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.622594 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:10Z","lastTransitionTime":"2026-01-31T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.725579 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.725622 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.725665 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.725684 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.725696 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:10Z","lastTransitionTime":"2026-01-31T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.828089 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.828139 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.828151 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.828168 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.828181 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:10Z","lastTransitionTime":"2026-01-31T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.930814 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.930860 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.930868 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.930882 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:10 crc kubenswrapper[4687]: I0131 06:44:10.930895 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:10Z","lastTransitionTime":"2026-01-31T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.032219 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.032260 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.032271 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.032284 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.032294 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:11Z","lastTransitionTime":"2026-01-31T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.135078 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.135130 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.135142 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.135157 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.135171 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:11Z","lastTransitionTime":"2026-01-31T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.237333 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.237424 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.237439 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.237454 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.237465 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:11Z","lastTransitionTime":"2026-01-31T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.340859 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.340993 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.341013 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.341036 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.341085 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:11Z","lastTransitionTime":"2026-01-31T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.401892 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.401967 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.401980 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.401999 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.402011 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:11Z","lastTransitionTime":"2026-01-31T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:11 crc kubenswrapper[4687]: E0131 06:44:11.420314 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.424387 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.424449 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.424459 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.424475 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.424485 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:11Z","lastTransitionTime":"2026-01-31T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:11 crc kubenswrapper[4687]: E0131 06:44:11.436989 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.440487 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.440538 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.440546 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.440560 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.440569 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:11Z","lastTransitionTime":"2026-01-31T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:11 crc kubenswrapper[4687]: E0131 06:44:11.454120 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.457546 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.457576 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.457585 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.457601 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.457613 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:11Z","lastTransitionTime":"2026-01-31T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:11 crc kubenswrapper[4687]: E0131 06:44:11.474046 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.478090 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.478127 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.478139 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.478155 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.478167 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:11Z","lastTransitionTime":"2026-01-31T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:11 crc kubenswrapper[4687]: E0131 06:44:11.494771 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:11 crc kubenswrapper[4687]: E0131 06:44:11.494934 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.496598 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.496649 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.496665 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.496687 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.496703 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:11Z","lastTransitionTime":"2026-01-31T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.587079 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:31:42.566882163 +0000 UTC Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.598771 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.598800 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.598808 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.598824 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.598834 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:11Z","lastTransitionTime":"2026-01-31T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.603060 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.603106 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:11 crc kubenswrapper[4687]: E0131 06:44:11.603142 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.603198 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.603284 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:11 crc kubenswrapper[4687]: E0131 06:44:11.603357 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:11 crc kubenswrapper[4687]: E0131 06:44:11.603536 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:11 crc kubenswrapper[4687]: E0131 06:44:11.603534 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.701086 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.701122 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.701132 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.701148 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.701157 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:11Z","lastTransitionTime":"2026-01-31T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.804009 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.804053 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.804062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.804076 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.804084 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:11Z","lastTransitionTime":"2026-01-31T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.906632 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.906680 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.906694 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.906710 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:11 crc kubenswrapper[4687]: I0131 06:44:11.906722 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:11Z","lastTransitionTime":"2026-01-31T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.009314 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.009363 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.009374 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.009435 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.009449 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:12Z","lastTransitionTime":"2026-01-31T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.112322 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.112363 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.112374 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.112392 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.112428 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:12Z","lastTransitionTime":"2026-01-31T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.214020 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.214056 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.214069 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.214090 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.214102 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:12Z","lastTransitionTime":"2026-01-31T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.316484 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.316553 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.316568 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.316587 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.316600 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:12Z","lastTransitionTime":"2026-01-31T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.419992 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.420054 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.420077 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.420105 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.420127 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:12Z","lastTransitionTime":"2026-01-31T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.522561 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.522611 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.522623 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.522639 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.522652 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:12Z","lastTransitionTime":"2026-01-31T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.587571 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:44:45.720373858 +0000 UTC Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.624763 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.624793 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.624808 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.624825 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.624837 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:12Z","lastTransitionTime":"2026-01-31T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.727880 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.727946 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.727956 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.727969 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.727978 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:12Z","lastTransitionTime":"2026-01-31T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.831040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.831074 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.831084 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.831129 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.831141 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:12Z","lastTransitionTime":"2026-01-31T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.935032 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.935113 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.935146 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.935165 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:12 crc kubenswrapper[4687]: I0131 06:44:12.935176 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:12Z","lastTransitionTime":"2026-01-31T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.037504 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.037552 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.037564 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.037580 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.037591 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:13Z","lastTransitionTime":"2026-01-31T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.140654 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.140698 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.140706 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.140726 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.140738 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:13Z","lastTransitionTime":"2026-01-31T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.244637 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.244687 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.244696 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.244709 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.244718 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:13Z","lastTransitionTime":"2026-01-31T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.347255 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.347312 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.347322 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.347337 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.347345 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:13Z","lastTransitionTime":"2026-01-31T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.450045 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.450085 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.450096 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.450112 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.450123 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:13Z","lastTransitionTime":"2026-01-31T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.552308 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.552446 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.552465 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.552480 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.552491 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:13Z","lastTransitionTime":"2026-01-31T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.587688 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:27:11.863151555 +0000 UTC Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.603034 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.603106 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:13 crc kubenswrapper[4687]: E0131 06:44:13.603148 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:13 crc kubenswrapper[4687]: E0131 06:44:13.603242 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.603290 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.603307 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:13 crc kubenswrapper[4687]: E0131 06:44:13.603363 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:13 crc kubenswrapper[4687]: E0131 06:44:13.603492 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.656089 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.656142 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.656159 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.656180 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.656198 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:13Z","lastTransitionTime":"2026-01-31T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.758202 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.758242 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.758252 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.758266 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.758276 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:13Z","lastTransitionTime":"2026-01-31T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.851433 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs\") pod \"network-metrics-daemon-hbxj7\" (UID: \"dead0f10-2469-49a4-8d26-93fc90d6451d\") " pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:13 crc kubenswrapper[4687]: E0131 06:44:13.851593 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 06:44:13 crc kubenswrapper[4687]: E0131 06:44:13.851658 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs podName:dead0f10-2469-49a4-8d26-93fc90d6451d nodeName:}" failed. No retries permitted until 2026-01-31 06:44:45.851641998 +0000 UTC m=+112.128901573 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs") pod "network-metrics-daemon-hbxj7" (UID: "dead0f10-2469-49a4-8d26-93fc90d6451d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.860541 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.860884 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.860900 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.860922 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.860938 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:13Z","lastTransitionTime":"2026-01-31T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.963753 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.963807 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.963825 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.963847 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:13 crc kubenswrapper[4687]: I0131 06:44:13.963863 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:13Z","lastTransitionTime":"2026-01-31T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.066850 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.066909 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.066924 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.066946 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.066962 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:14Z","lastTransitionTime":"2026-01-31T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.169706 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.169759 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.169771 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.169791 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.169800 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:14Z","lastTransitionTime":"2026-01-31T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.272338 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.272376 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.272388 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.272453 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.272468 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:14Z","lastTransitionTime":"2026-01-31T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.375228 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.375269 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.375277 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.375291 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.375299 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:14Z","lastTransitionTime":"2026-01-31T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.477537 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.477752 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.477760 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.477774 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.477783 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:14Z","lastTransitionTime":"2026-01-31T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.580599 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.580644 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.580652 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.580696 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.580708 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:14Z","lastTransitionTime":"2026-01-31T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.587994 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:32:02.539602458 +0000 UTC Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.682764 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.682826 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.682837 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.682855 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.682869 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:14Z","lastTransitionTime":"2026-01-31T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.784828 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.784914 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.784925 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.784944 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.784956 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:14Z","lastTransitionTime":"2026-01-31T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.888090 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.888129 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.888140 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.888155 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.888171 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:14Z","lastTransitionTime":"2026-01-31T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.990756 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.990811 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.990822 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.990839 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:14 crc kubenswrapper[4687]: I0131 06:44:14.990853 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:14Z","lastTransitionTime":"2026-01-31T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.093324 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.093370 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.093380 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.093397 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.093441 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:15Z","lastTransitionTime":"2026-01-31T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.195386 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.195452 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.195463 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.195478 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.195488 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:15Z","lastTransitionTime":"2026-01-31T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.297711 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.297777 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.297792 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.297833 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.297845 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:15Z","lastTransitionTime":"2026-01-31T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.401429 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.401477 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.401490 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.401507 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.401522 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:15Z","lastTransitionTime":"2026-01-31T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.503705 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.503747 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.503757 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.503769 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.503777 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:15Z","lastTransitionTime":"2026-01-31T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.588815 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:55:38.998797368 +0000 UTC Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.603205 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.603240 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.603225 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:15 crc kubenswrapper[4687]: E0131 06:44:15.603371 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.603503 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:15 crc kubenswrapper[4687]: E0131 06:44:15.603580 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:15 crc kubenswrapper[4687]: E0131 06:44:15.603650 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:15 crc kubenswrapper[4687]: E0131 06:44:15.603953 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.605355 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.605699 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.605711 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.605727 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.605740 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:15Z","lastTransitionTime":"2026-01-31T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.616456 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.624054 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4
a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPa
th\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.649321 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:55Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336726 6448 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0131 06:43:55.336734 6448 services_controller.go:454] Service openshift-machine-api/machine-api-operator for 
network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0131 06:43:55.336746 6448 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336686 6448 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0131 06:43:55.336768 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae0636
69f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.666191 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.683271 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.697858 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.708967 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.709013 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.709024 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.709043 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.709058 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:15Z","lastTransitionTime":"2026-01-31T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.712069 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.725261 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.741768 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.755265 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.766865 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.777950 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.790937 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.809190 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a933e3df-17c7-4b7a-b8a4-f9fdfc1f9116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177e7d2f12d88ec65f696b1500003d22464529c2a37000255906cb4e3598c69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196dcd26b49160f379dc443180c93bb1148dd0d2977f516ff48a81d618e3caef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9f67c3167938a802b72b7df28c4cf024417f605419ea56865822746cceed27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.810896 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.810989 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.811006 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.811023 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.811062 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:15Z","lastTransitionTime":"2026-01-31T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.823192 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.835876 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.851318 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.863031 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61eb
f82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.914049 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.914103 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.914116 4687 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.914134 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:15 crc kubenswrapper[4687]: I0131 06:44:15.914147 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:15Z","lastTransitionTime":"2026-01-31T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.015871 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.015909 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.015919 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.015935 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.015946 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:16Z","lastTransitionTime":"2026-01-31T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.118478 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.118520 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.118530 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.118543 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.118552 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:16Z","lastTransitionTime":"2026-01-31T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.221580 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.221627 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.221641 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.221656 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.221668 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:16Z","lastTransitionTime":"2026-01-31T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.324134 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.324172 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.324180 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.324194 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.324203 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:16Z","lastTransitionTime":"2026-01-31T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.426635 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.426683 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.426695 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.426711 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.426720 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:16Z","lastTransitionTime":"2026-01-31T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.528994 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.529044 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.529057 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.529072 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.529086 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:16Z","lastTransitionTime":"2026-01-31T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.589543 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:23:36.35856652 +0000 UTC Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.631779 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.631823 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.631834 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.631849 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.631862 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:16Z","lastTransitionTime":"2026-01-31T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.733776 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.733809 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.733817 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.733831 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.733843 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:16Z","lastTransitionTime":"2026-01-31T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.836242 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.836299 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.836310 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.836326 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.836337 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:16Z","lastTransitionTime":"2026-01-31T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.938791 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.938847 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.938858 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.938875 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:16 crc kubenswrapper[4687]: I0131 06:44:16.938888 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:16Z","lastTransitionTime":"2026-01-31T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.044003 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.044069 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.044082 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.044099 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.044111 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:17Z","lastTransitionTime":"2026-01-31T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.051766 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77mzd_96c21054-65ed-4db4-969f-bbb10f612772/kube-multus/0.log" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.051823 4687 generic.go:334] "Generic (PLEG): container finished" podID="96c21054-65ed-4db4-969f-bbb10f612772" containerID="8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7" exitCode=1 Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.051857 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-77mzd" event={"ID":"96c21054-65ed-4db4-969f-bbb10f612772","Type":"ContainerDied","Data":"8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7"} Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.052306 4687 scope.go:117] "RemoveContainer" containerID="8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.064255 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.078042 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae46619-0b2b-4fa4-901a-c3b5c31fc100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27d1e2abd6e78dbd3c9966ee4a6b5ae79d62dc6abacfbafdf7ebd9073e41ac56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc 
kubenswrapper[4687]: I0131 06:44:17.093469 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:44:16Z\\\",\\\"message\\\":\\\"2026-01-31T06:43:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a2da230a-2c2a-42da-98e3-86d9a47550ef\\\\n2026-01-31T06:43:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a2da230a-2c2a-42da-98e3-86d9a47550ef to /host/opt/cni/bin/\\\\n2026-01-31T06:43:31Z [verbose] multus-daemon started\\\\n2026-01-31T06:43:31Z [verbose] Readiness Indicator file check\\\\n2026-01-31T06:44:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.112605 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddf
bb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\
\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.136516 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:55Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 
06:43:55.336726 6448 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0131 06:43:55.336734 6448 services_controller.go:454] Service openshift-machine-api/machine-api-operator for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0131 06:43:55.336746 6448 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336686 6448 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0131 06:43:55.336768 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae0636
69f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.147191 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.147236 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.147250 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.147266 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.147310 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:17Z","lastTransitionTime":"2026-01-31T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.151196 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc 
kubenswrapper[4687]: I0131 06:44:17.167180 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f
68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 
06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.182447 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.202181 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.220032 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.235369 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.249954 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.250024 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.250038 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:17 crc 
kubenswrapper[4687]: I0131 06:44:17.250056 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.250068 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:17Z","lastTransitionTime":"2026-01-31T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.251914 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a933e3df-17c7-4b7a-b8a4-f9fdfc1f9116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177e7d2f12d88ec65f696b1500003d22464529c2a37000255906cb4e3598c69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196dcd26b49160f379dc443180c93bb1148dd0d2977f516ff48a81d618e3caef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9f67c3167938a802b72b7df28c4cf024417f605419ea56865822746cceed27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.268217 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.281968 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.297114 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.310917 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.324945 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.337682 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61eb
f82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.352595 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.352639 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.352652 4687 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.352668 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.352680 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:17Z","lastTransitionTime":"2026-01-31T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.455392 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.455458 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.455467 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.455481 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.455489 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:17Z","lastTransitionTime":"2026-01-31T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.557234 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.557277 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.557287 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.557305 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.557319 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:17Z","lastTransitionTime":"2026-01-31T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.590567 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:51:30.413832129 +0000 UTC Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.602994 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.603024 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.603024 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.603063 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:17 crc kubenswrapper[4687]: E0131 06:44:17.603364 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:17 crc kubenswrapper[4687]: E0131 06:44:17.603514 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:17 crc kubenswrapper[4687]: E0131 06:44:17.603567 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:17 crc kubenswrapper[4687]: E0131 06:44:17.603634 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.659601 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.659871 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.659977 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.660063 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.660146 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:17Z","lastTransitionTime":"2026-01-31T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.762230 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.762267 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.762277 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.762292 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.762304 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:17Z","lastTransitionTime":"2026-01-31T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.864640 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.864708 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.864718 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.864733 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.864744 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:17Z","lastTransitionTime":"2026-01-31T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.966996 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.967065 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.967080 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.967098 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:17 crc kubenswrapper[4687]: I0131 06:44:17.967112 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:17Z","lastTransitionTime":"2026-01-31T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.058711 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77mzd_96c21054-65ed-4db4-969f-bbb10f612772/kube-multus/0.log" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.058799 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-77mzd" event={"ID":"96c21054-65ed-4db4-969f-bbb10f612772","Type":"ContainerStarted","Data":"f976ffe4cdfe5f8bfeef0529c144f8511fd1e7562a2202b11fd602c815560562"} Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.069845 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.069886 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.069897 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.069912 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.069923 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:18Z","lastTransitionTime":"2026-01-31T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.073997 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.087096 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.097148 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.107901 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.120862 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a933e3df-17c7-4b7a-b8a4-f9fdfc1f9116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177e7d2f12d88ec65f696b1500003d22464529c2a37000255906cb4e3598c69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196dcd26b49160f379dc443180c93bb1148dd0d2977f516ff48a81d618e3caef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9f67c3167938a802b72b7df28c4cf024417f605419ea56865822746cceed27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.130870 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61eb
f82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.145140 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.157916 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae46619-0b2b-4fa4-901a-c3b5c31fc100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27d1e2abd6e78dbd3c9966ee4a6b5ae79d62dc6abacfbafdf7ebd9073e41ac56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.171962 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.172015 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.172028 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.172047 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.172059 4687 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:18Z","lastTransitionTime":"2026-01-31T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.176341 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f976ffe4cdfe5f8bfeef0529c144f8511fd1e7562a2202b11fd602c815560562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"con
tainerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:44:16Z\\\",\\\"message\\\":\\\"2026-01-31T06:43:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a2da230a-2c2a-42da-98e3-86d9a47550ef\\\\n2026-01-31T06:43:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a2da230a-2c2a-42da-98e3-86d9a47550ef to /host/opt/cni/bin/\\\\n2026-01-31T06:43:31Z [verbose] multus-daemon started\\\\n2026-01-31T06:43:31Z [verbose] Readiness Indicator file check\\\\n2026-01-31T06:44:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\
"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.191642 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c
05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.208989 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:55Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336726 6448 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0131 06:43:55.336734 6448 services_controller.go:454] Service openshift-machine-api/machine-api-operator for 
network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0131 06:43:55.336746 6448 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336686 6448 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0131 06:43:55.336768 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae0636
69f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.220102 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.231973 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.243766 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.255055 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.265677 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.274265 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.274300 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.274308 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.274320 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.274331 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:18Z","lastTransitionTime":"2026-01-31T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.275763 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc 
kubenswrapper[4687]: I0131 06:44:18.289228 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f
68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 
06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.376568 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.376887 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.376969 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.377068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.377151 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:18Z","lastTransitionTime":"2026-01-31T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.479367 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.479664 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.479768 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.479859 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.479950 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:18Z","lastTransitionTime":"2026-01-31T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.582613 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.582651 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.582659 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.582672 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.582681 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:18Z","lastTransitionTime":"2026-01-31T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.591791 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:29:12.218789877 +0000 UTC Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.684772 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.685058 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.685149 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.685259 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.685363 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:18Z","lastTransitionTime":"2026-01-31T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.787765 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.787843 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.787870 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.787901 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.787926 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:18Z","lastTransitionTime":"2026-01-31T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.890141 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.890179 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.890191 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.890208 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.890219 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:18Z","lastTransitionTime":"2026-01-31T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.993341 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.993394 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.993428 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.993449 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:18 crc kubenswrapper[4687]: I0131 06:44:18.993462 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:18Z","lastTransitionTime":"2026-01-31T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.095955 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.096018 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.096039 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.096063 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.096079 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:19Z","lastTransitionTime":"2026-01-31T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.199040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.199092 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.199105 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.199122 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.199137 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:19Z","lastTransitionTime":"2026-01-31T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.301871 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.301906 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.301917 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.301933 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.301945 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:19Z","lastTransitionTime":"2026-01-31T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.404537 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.404599 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.404616 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.404634 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.404648 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:19Z","lastTransitionTime":"2026-01-31T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.507198 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.507283 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.507297 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.507312 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.507323 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:19Z","lastTransitionTime":"2026-01-31T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.592480 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:18:16.861540544 +0000 UTC Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.602881 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.602958 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.602985 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:19 crc kubenswrapper[4687]: E0131 06:44:19.603026 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.603061 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:19 crc kubenswrapper[4687]: E0131 06:44:19.603170 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:19 crc kubenswrapper[4687]: E0131 06:44:19.603231 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:19 crc kubenswrapper[4687]: E0131 06:44:19.603284 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.608938 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.608982 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.608993 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.609007 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.609020 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:19Z","lastTransitionTime":"2026-01-31T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.712065 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.712127 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.712149 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.712177 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.712198 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:19Z","lastTransitionTime":"2026-01-31T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.814285 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.814319 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.814328 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.814350 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.814365 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:19Z","lastTransitionTime":"2026-01-31T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.916536 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.916578 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.916588 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.916604 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:19 crc kubenswrapper[4687]: I0131 06:44:19.916617 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:19Z","lastTransitionTime":"2026-01-31T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.018998 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.019043 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.019054 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.019071 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.019083 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:20Z","lastTransitionTime":"2026-01-31T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.122014 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.122067 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.122077 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.122094 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.122106 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:20Z","lastTransitionTime":"2026-01-31T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.224481 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.224525 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.224537 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.224554 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.224566 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:20Z","lastTransitionTime":"2026-01-31T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.327404 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.327459 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.327474 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.327489 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.327500 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:20Z","lastTransitionTime":"2026-01-31T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.431096 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.431615 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.431743 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.431916 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.432077 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:20Z","lastTransitionTime":"2026-01-31T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.535257 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.535313 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.535323 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.535344 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.535362 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:20Z","lastTransitionTime":"2026-01-31T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.593521 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 03:52:19.689746893 +0000 UTC Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.638157 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.638208 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.638217 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.638236 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.638249 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:20Z","lastTransitionTime":"2026-01-31T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.740331 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.740374 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.740387 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.740403 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.740431 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:20Z","lastTransitionTime":"2026-01-31T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.843284 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.843325 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.843335 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.843350 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.843362 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:20Z","lastTransitionTime":"2026-01-31T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.945666 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.945756 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.945771 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.945795 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:20 crc kubenswrapper[4687]: I0131 06:44:20.945812 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:20Z","lastTransitionTime":"2026-01-31T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.048772 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.048803 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.048812 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.048825 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.048834 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:21Z","lastTransitionTime":"2026-01-31T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.151044 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.151096 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.151106 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.151145 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.151161 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:21Z","lastTransitionTime":"2026-01-31T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.253395 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.253455 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.253463 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.253475 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.253484 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:21Z","lastTransitionTime":"2026-01-31T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.356240 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.356290 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.356304 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.356318 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.356330 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:21Z","lastTransitionTime":"2026-01-31T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.458965 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.459024 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.459036 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.459065 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.459076 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:21Z","lastTransitionTime":"2026-01-31T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.561132 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.561174 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.561185 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.561199 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.561212 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:21Z","lastTransitionTime":"2026-01-31T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.594709 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:34:40.481497068 +0000 UTC Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.603149 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.603196 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:21 crc kubenswrapper[4687]: E0131 06:44:21.603301 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.603336 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:21 crc kubenswrapper[4687]: E0131 06:44:21.603449 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.603493 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:21 crc kubenswrapper[4687]: E0131 06:44:21.603555 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:21 crc kubenswrapper[4687]: E0131 06:44:21.604075 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.604201 4687 scope.go:117] "RemoveContainer" containerID="9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.663165 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.663482 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.663495 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.663514 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.663529 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:21Z","lastTransitionTime":"2026-01-31T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.732877 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.732914 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.732923 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.732936 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.732944 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:21Z","lastTransitionTime":"2026-01-31T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:21 crc kubenswrapper[4687]: E0131 06:44:21.745288 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:21Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.749504 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.749559 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.749573 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.749587 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.749913 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:21Z","lastTransitionTime":"2026-01-31T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:21 crc kubenswrapper[4687]: E0131 06:44:21.765182 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:21Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.768879 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.768906 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.768917 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.768932 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.768944 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:21Z","lastTransitionTime":"2026-01-31T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:21 crc kubenswrapper[4687]: E0131 06:44:21.783401 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:21Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.788514 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.788563 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.788574 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.788591 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.788605 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:21Z","lastTransitionTime":"2026-01-31T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:21 crc kubenswrapper[4687]: E0131 06:44:21.803185 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:21Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.807040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.807086 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.807094 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.807109 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.807119 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:21Z","lastTransitionTime":"2026-01-31T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:21 crc kubenswrapper[4687]: E0131 06:44:21.825112 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:21Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:21 crc kubenswrapper[4687]: E0131 06:44:21.825254 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.826896 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.826927 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.826939 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.826955 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.826966 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:21Z","lastTransitionTime":"2026-01-31T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.929679 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.929727 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.929739 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.929756 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:21 crc kubenswrapper[4687]: I0131 06:44:21.929767 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:21Z","lastTransitionTime":"2026-01-31T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.032449 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.032490 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.032500 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.032516 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.032527 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:22Z","lastTransitionTime":"2026-01-31T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.073935 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovnkube-controller/2.log" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.077745 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerStarted","Data":"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d"} Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.078248 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.100388 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a933e3df-17c7-4b7a-b8a4-f9fdfc1f9116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177e7d2f12d88ec65f696b1500003d22464529c2a37000255906cb4e3598c69e\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196dcd26b49160f379dc443180c93bb1148dd0d2977f516ff48a81d618e3caef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9f67c3167938a802b72b7df28c4cf024417f605419ea56865822746cceed27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.116214 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.132275 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.134803 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.134851 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.134863 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.134882 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.134893 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:22Z","lastTransitionTime":"2026-01-31T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.142890 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.155699 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.169692 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-i
dentity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.180807 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61eb
f82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.196628 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae46619-0b2b-4fa4-901a-c3b5c31fc100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27d1e2abd6e78dbd3c9966ee4a6b5ae79d62dc6abacfbafdf7ebd9073e41ac56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.221775 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f976ffe4cdfe5f8bfeef0529c144f8511fd1e7562a2202b11fd602c815560562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:44:16Z\\\",\\\"message\\\":\\\"2026-01-31T06:43:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a2da230a-2c2a-42da-98e3-86d9a47550ef\\\\n2026-01-31T06:43:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a2da230a-2c2a-42da-98e3-86d9a47550ef to /host/opt/cni/bin/\\\\n2026-01-31T06:43:31Z [verbose] multus-daemon started\\\\n2026-01-31T06:43:31Z [verbose] 
Readiness Indicator file check\\\\n2026-01-31T06:44:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.237779 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.238002 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.238102 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.238179 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.238248 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:22Z","lastTransitionTime":"2026-01-31T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.242500 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.261685 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:55Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336726 6448 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0131 06:43:55.336734 6448 services_controller.go:454] Service openshift-machine-api/machine-api-operator for 
network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0131 06:43:55.336746 6448 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336686 6448 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0131 06:43:55.336768 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.274340 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.289871 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T0
6:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.304286 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.318844 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.332865 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.341686 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.341732 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.341744 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.341762 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.341774 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:22Z","lastTransitionTime":"2026-01-31T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.347703 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.359394 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.445283 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.445332 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.445342 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.445358 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.445369 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:22Z","lastTransitionTime":"2026-01-31T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.548300 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.548336 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.548352 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.548373 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.548390 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:22Z","lastTransitionTime":"2026-01-31T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.595717 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:12:30.915031138 +0000 UTC Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.650949 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.651021 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.651035 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.651053 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.651067 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:22Z","lastTransitionTime":"2026-01-31T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.753054 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.753124 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.753137 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.753151 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.753161 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:22Z","lastTransitionTime":"2026-01-31T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.856512 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.856671 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.856695 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.856733 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.856752 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:22Z","lastTransitionTime":"2026-01-31T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.959165 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.959225 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.959241 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.959263 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:22 crc kubenswrapper[4687]: I0131 06:44:22.959279 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:22Z","lastTransitionTime":"2026-01-31T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.062279 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.062334 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.062347 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.062371 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.062386 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:23Z","lastTransitionTime":"2026-01-31T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.082336 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovnkube-controller/3.log" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.083077 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovnkube-controller/2.log" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.085931 4687 generic.go:334] "Generic (PLEG): container finished" podID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerID="900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d" exitCode=1 Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.085987 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerDied","Data":"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d"} Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.086043 4687 scope.go:117] "RemoveContainer" containerID="9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.086868 4687 scope.go:117] "RemoveContainer" containerID="900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d" Jan 31 06:44:23 crc kubenswrapper[4687]: E0131 06:44:23.087071 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.101770 4687 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.116697 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61eb
f82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.138761 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c
05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.160098 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de09294474c0c73c2295ef9c25054092cdc9b456c1bcfccc273d9d565cbc9e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:43:55Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336726 6448 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0131 06:43:55.336734 6448 services_controller.go:454] Service openshift-machine-api/machine-api-operator for 
network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0131 06:43:55.336746 6448 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 06:43:55.336686 6448 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0131 06:43:55.336768 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:44:22Z\\\",\\\"message\\\":\\\"pis/informers/externalversions/factory.go:140\\\\nI0131 06:44:22.381680 6885 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 06:44:22.381930 6885 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 06:44:22.381950 6885 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 06:44:22.381955 6885 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 06:44:22.381997 6885 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 06:44:22.382005 6885 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 06:44:22.382013 6885 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 06:44:22.382025 6885 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 06:44:22.382026 6885 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 06:44:22.382049 6885 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 06:44:22.382059 6885 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 06:44:22.382073 6885 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 06:44:22.382082 6885 factory.go:656] Stopping watch factory\\\\nI0131 06:44:22.382094 6885 ovnkube.go:599] Stopped ovnkube\\\\nI0131 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\
\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.164269 4687 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.164321 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.164333 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.164348 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.164361 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:23Z","lastTransitionTime":"2026-01-31T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.172037 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.185250 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae46619-0b2b-4fa4-901a-c3b5c31fc100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27d1e2abd6e78dbd3c9966ee4a6b5ae79d62dc6abacfbafdf7ebd9073e41ac56\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.201589 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f976ffe4cdfe5f8bfeef0529c144f8511fd1e7562a2202b11fd602c815560562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:44:16Z\\\",\\\"message\\\":\\\"2026-01-31T06:43:31
+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a2da230a-2c2a-42da-98e3-86d9a47550ef\\\\n2026-01-31T06:43:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a2da230a-2c2a-42da-98e3-86d9a47550ef to /host/opt/cni/bin/\\\\n2026-01-31T06:43:31Z [verbose] multus-daemon started\\\\n2026-01-31T06:43:31Z [verbose] Readiness Indicator file check\\\\n2026-01-31T06:44:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",
\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.217763 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.231651 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.244842 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.260702 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.266462 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.266503 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.266513 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.266529 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.266540 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:23Z","lastTransitionTime":"2026-01-31T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.277179 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.289389 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.299546 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.312243 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.326919 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a933e3df-17c7-4b7a-b8a4-f9fdfc1f9116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177e7d2f12d88ec65f696b1500003d22464529c2a37000255906cb4e3598c69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196dcd26b49160f379dc443180c93bb1148dd0d2977f516ff48a81d618e3caef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9f67c3167938a802b72b7df28c4cf024417f605419ea56865822746cceed27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.339427 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.351449 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.369428 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.369480 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.369490 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.369505 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.369514 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:23Z","lastTransitionTime":"2026-01-31T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.472331 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.472627 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.472789 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.472894 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.472990 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:23Z","lastTransitionTime":"2026-01-31T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.575931 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.575979 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.575990 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.576007 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.576018 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:23Z","lastTransitionTime":"2026-01-31T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.596665 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:13:21.283007863 +0000 UTC Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.603132 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.603195 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.603262 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:23 crc kubenswrapper[4687]: E0131 06:44:23.603365 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.603395 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:23 crc kubenswrapper[4687]: E0131 06:44:23.603506 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:23 crc kubenswrapper[4687]: E0131 06:44:23.603597 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:23 crc kubenswrapper[4687]: E0131 06:44:23.603694 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.678227 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.678314 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.678330 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.678349 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.678360 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:23Z","lastTransitionTime":"2026-01-31T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.780813 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.780852 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.780865 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.780880 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.780891 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:23Z","lastTransitionTime":"2026-01-31T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.883147 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.883510 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.883605 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.883687 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.883770 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:23Z","lastTransitionTime":"2026-01-31T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.986837 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.986898 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.986910 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.986928 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:23 crc kubenswrapper[4687]: I0131 06:44:23.986943 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:23Z","lastTransitionTime":"2026-01-31T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.089024 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.089062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.089072 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.089091 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.089102 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:24Z","lastTransitionTime":"2026-01-31T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.092250 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovnkube-controller/3.log" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.096494 4687 scope.go:117] "RemoveContainer" containerID="900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d" Jan 31 06:44:24 crc kubenswrapper[4687]: E0131 06:44:24.096829 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.120448 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:44:22Z\\\",\\\"message\\\":\\\"pis/informers/externalversions/factory.go:140\\\\nI0131 06:44:22.381680 6885 
reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 06:44:22.381930 6885 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 06:44:22.381950 6885 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 06:44:22.381955 6885 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 06:44:22.381997 6885 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 06:44:22.382005 6885 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 06:44:22.382013 6885 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 06:44:22.382025 6885 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 06:44:22.382026 6885 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 06:44:22.382049 6885 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 06:44:22.382059 6885 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 06:44:22.382073 6885 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 06:44:22.382082 6885 factory.go:656] Stopping watch factory\\\\nI0131 06:44:22.382094 6885 ovnkube.go:599] Stopped ovnkube\\\\nI0131 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:44:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae0636
69f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.133284 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.144619 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae46619-0b2b-4fa4-901a-c3b5c31fc100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27d1e2abd6e78dbd3c9966ee4a6b5ae79d62dc6abacfbafdf7ebd9073e41ac56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc 
kubenswrapper[4687]: I0131 06:44:24.158786 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f976ffe4cdfe5f8bfeef0529c144f8511fd1e7562a2202b11fd602c815560562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:44:16Z\\\",\\\"message\\\":\\\"2026-01-31T06:43:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a2da230a-2c2a-42da-98e3-86d9a47550ef\\\\n2026-01-31T06:43:31+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_a2da230a-2c2a-42da-98e3-86d9a47550ef to /host/opt/cni/bin/\\\\n2026-01-31T06:43:31Z [verbose] multus-daemon started\\\\n2026-01-31T06:43:31Z [verbose] Readiness Indicator file check\\\\n2026-01-31T06:44:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.174490 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c
05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.188809 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.191223 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.191531 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.191754 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.191848 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.191911 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:24Z","lastTransitionTime":"2026-01-31T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.199729 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc 
kubenswrapper[4687]: I0131 06:44:24.215091 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f
68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 
06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.229950 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.244128 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.261472 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.285211 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.294979 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.295026 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.295040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:24 crc 
kubenswrapper[4687]: I0131 06:44:24.295058 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.295072 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:24Z","lastTransitionTime":"2026-01-31T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.302852 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a933e3df-17c7-4b7a-b8a4-f9fdfc1f9116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177e7d2f12d88ec65f696b1500003d22464529c2a37000255906cb4e3598c69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196dcd26b49160f379dc443180c93bb1148dd0d2977f516ff48a81d618e3caef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9f67c3167938a802b72b7df28c4cf024417f605419ea56865822746cceed27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.317074 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.330384 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.341986 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.359604 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.374209 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61eb
f82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.396970 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.397011 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.397022 4687 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.397038 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.397049 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:24Z","lastTransitionTime":"2026-01-31T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.499839 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.500148 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.500271 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.500356 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.500470 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:24Z","lastTransitionTime":"2026-01-31T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.596856 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:42:03.326622756 +0000 UTC Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.602506 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.602540 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.602549 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.602563 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.602576 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:24Z","lastTransitionTime":"2026-01-31T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.705105 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.705146 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.705155 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.705169 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.705179 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:24Z","lastTransitionTime":"2026-01-31T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.808896 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.809211 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.809277 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.809341 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.809440 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:24Z","lastTransitionTime":"2026-01-31T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.911982 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.912023 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.912035 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.912052 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:24 crc kubenswrapper[4687]: I0131 06:44:24.912064 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:24Z","lastTransitionTime":"2026-01-31T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.015060 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.015108 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.015120 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.015134 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.015145 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:25Z","lastTransitionTime":"2026-01-31T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.118949 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.119041 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.119065 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.119096 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.119118 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:25Z","lastTransitionTime":"2026-01-31T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.221646 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.221689 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.221701 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.221716 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.221726 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:25Z","lastTransitionTime":"2026-01-31T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.323823 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.323867 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.323877 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.323891 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.323903 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:25Z","lastTransitionTime":"2026-01-31T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.426004 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.426054 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.426068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.426083 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.426093 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:25Z","lastTransitionTime":"2026-01-31T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.528888 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.528939 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.528947 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.528960 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.528969 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:25Z","lastTransitionTime":"2026-01-31T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.597678 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 14:15:08.022480804 +0000 UTC Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.603072 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.603111 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.603221 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:25 crc kubenswrapper[4687]: E0131 06:44:25.603232 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:25 crc kubenswrapper[4687]: E0131 06:44:25.603483 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:25 crc kubenswrapper[4687]: E0131 06:44:25.603634 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.603723 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:25 crc kubenswrapper[4687]: E0131 06:44:25.603899 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.621612 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a933e3df-17c7-4b7a-b8a4-f9fdfc1f9116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177e7d2f12d88ec65f696b1500003d22464529c2a37000255906cb4e3598c69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196dcd26b49160f379dc443180c93bb1148dd0d2977f516ff48a81d618e3caef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9f67c3167938a802b72b7df28c4cf024417f605419ea56865822746cceed27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.631662 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.631709 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 
06:44:25.631720 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.631738 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.631762 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:25Z","lastTransitionTime":"2026-01-31T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.638652 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.650541 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.662861 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.673784 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.687139 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.700081 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61eb
f82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.714693 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.726562 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae46619-0b2b-4fa4-901a-c3b5c31fc100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27d1e2abd6e78dbd3c9966ee4a6b5ae79d62dc6abacfbafdf7ebd9073e41ac56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc 
kubenswrapper[4687]: I0131 06:44:25.734169 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.734449 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.734558 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.734926 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.735067 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:25Z","lastTransitionTime":"2026-01-31T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.740757 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f976ffe4cdfe5f8bfeef0529c144f8511fd1e7562a2202b11fd602c815560562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:44:16Z\\\",\\\"message\\\":\\\"2026-01-31T06:43:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a2da230a-2c2a-42da-98e3-86d9a47550ef\\\\n2026-01-31T06:43:31+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a2da230a-2c2a-42da-98e3-86d9a47550ef to /host/opt/cni/bin/\\\\n2026-01-31T06:43:31Z [verbose] multus-daemon started\\\\n2026-01-31T06:43:31Z [verbose] Readiness Indicator file check\\\\n2026-01-31T06:44:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.758008 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c
05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.780311 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:44:22Z\\\",\\\"message\\\":\\\"pis/informers/externalversions/factory.go:140\\\\nI0131 06:44:22.381680 6885 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 06:44:22.381930 6885 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0131 06:44:22.381950 6885 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 06:44:22.381955 6885 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 06:44:22.381997 6885 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 06:44:22.382005 6885 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 06:44:22.382013 6885 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 06:44:22.382025 6885 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 06:44:22.382026 6885 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 06:44:22.382049 6885 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 06:44:22.382059 6885 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 06:44:22.382073 6885 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 06:44:22.382082 6885 factory.go:656] Stopping watch factory\\\\nI0131 06:44:22.382094 6885 ovnkube.go:599] Stopped ovnkube\\\\nI0131 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:44:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae0636
69f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.795127 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.813056 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7
ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.828266 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.837360 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.837433 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.837443 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.837465 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.837478 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:25Z","lastTransitionTime":"2026-01-31T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.842696 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.860133 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.875472 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.939563 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.940101 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.940686 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.941560 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:25 crc kubenswrapper[4687]: I0131 06:44:25.941622 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:25Z","lastTransitionTime":"2026-01-31T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.043959 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.044006 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.044017 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.044036 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.044048 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:26Z","lastTransitionTime":"2026-01-31T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.146271 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.146317 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.146328 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.146349 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.146362 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:26Z","lastTransitionTime":"2026-01-31T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.248498 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.248551 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.248565 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.248583 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.248597 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:26Z","lastTransitionTime":"2026-01-31T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.351856 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.351948 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.351975 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.352002 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.352019 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:26Z","lastTransitionTime":"2026-01-31T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.454486 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.454588 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.454603 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.454628 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.454644 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:26Z","lastTransitionTime":"2026-01-31T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.558301 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.558778 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.558904 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.558985 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.559044 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:26Z","lastTransitionTime":"2026-01-31T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.597883 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:14:00.009109894 +0000 UTC Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.662367 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.662902 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.663053 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.663136 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.663205 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:26Z","lastTransitionTime":"2026-01-31T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.765875 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.766115 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.766203 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.766309 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.766377 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:26Z","lastTransitionTime":"2026-01-31T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.871717 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.871786 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.871797 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.871831 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.871841 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:26Z","lastTransitionTime":"2026-01-31T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.974332 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.974380 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.974395 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.974436 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:26 crc kubenswrapper[4687]: I0131 06:44:26.974448 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:26Z","lastTransitionTime":"2026-01-31T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.078050 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.078111 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.078121 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.078145 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.078157 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:27Z","lastTransitionTime":"2026-01-31T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.180665 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.180744 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.180763 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.180788 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.180803 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:27Z","lastTransitionTime":"2026-01-31T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.283188 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.283220 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.283227 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.283240 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.283248 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:27Z","lastTransitionTime":"2026-01-31T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.385250 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.385292 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.385333 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.385348 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.385362 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:27Z","lastTransitionTime":"2026-01-31T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.488306 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.488363 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.488374 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.488394 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.488428 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:27Z","lastTransitionTime":"2026-01-31T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.493804 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.493919 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 06:44:27.493944 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.493916889 +0000 UTC m=+157.771176504 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.494024 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 06:44:27.494048 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 06:44:27.494094 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 06:44:27.494112 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.494095204 +0000 UTC m=+157.771354879 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 06:44:27.494149 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.494135725 +0000 UTC m=+157.771395400 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.591815 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.591853 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.591865 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.591886 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.591900 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:27Z","lastTransitionTime":"2026-01-31T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.594651 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.594708 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 06:44:27.594848 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 06:44:27.594868 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 06:44:27.594862 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 
06:44:27.594924 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 06:44:27.594942 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 06:44:27.595002 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.594983058 +0000 UTC m=+157.872242753 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 06:44:27.594884 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 06:44:27.595074 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.59505697 +0000 UTC m=+157.872316615 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.598516 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 04:08:36.013035806 +0000 UTC Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.603000 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 06:44:27.603166 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.603012 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.603006 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 06:44:27.603262 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.603285 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 06:44:27.603467 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:27 crc kubenswrapper[4687]: E0131 06:44:27.603554 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.694495 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.694544 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.694555 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.694572 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.694585 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:27Z","lastTransitionTime":"2026-01-31T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.796640 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.796676 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.796686 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.796702 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.796717 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:27Z","lastTransitionTime":"2026-01-31T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.899389 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.899506 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.899530 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.899551 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:27 crc kubenswrapper[4687]: I0131 06:44:27.899565 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:27Z","lastTransitionTime":"2026-01-31T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.002803 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.002867 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.002883 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.002905 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.002917 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:28Z","lastTransitionTime":"2026-01-31T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.106272 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.106313 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.106326 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.106343 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.106355 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:28Z","lastTransitionTime":"2026-01-31T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.210182 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.210243 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.210256 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.210275 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.210287 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:28Z","lastTransitionTime":"2026-01-31T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.313707 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.313744 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.313752 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.313772 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.313783 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:28Z","lastTransitionTime":"2026-01-31T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.417132 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.417170 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.417180 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.417194 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.417204 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:28Z","lastTransitionTime":"2026-01-31T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.519792 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.519829 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.519841 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.519859 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.519871 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:28Z","lastTransitionTime":"2026-01-31T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.598634 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 03:36:43.799254227 +0000 UTC Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.623007 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.623052 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.623063 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.623079 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.623090 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:28Z","lastTransitionTime":"2026-01-31T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.725209 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.725246 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.725255 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.725267 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.725275 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:28Z","lastTransitionTime":"2026-01-31T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.827525 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.827560 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.827570 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.827586 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.827598 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:28Z","lastTransitionTime":"2026-01-31T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.930398 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.930528 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.930555 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.930583 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:28 crc kubenswrapper[4687]: I0131 06:44:28.930605 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:28Z","lastTransitionTime":"2026-01-31T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.033984 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.034042 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.034052 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.034071 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.034086 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:29Z","lastTransitionTime":"2026-01-31T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.137144 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.137221 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.137234 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.137254 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.137266 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:29Z","lastTransitionTime":"2026-01-31T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.239729 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.239780 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.239794 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.239811 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.239826 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:29Z","lastTransitionTime":"2026-01-31T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.342541 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.342590 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.342600 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.342616 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.342628 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:29Z","lastTransitionTime":"2026-01-31T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.445173 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.445212 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.445221 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.445237 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.445250 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:29Z","lastTransitionTime":"2026-01-31T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.549874 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.549938 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.549950 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.549967 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.549978 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:29Z","lastTransitionTime":"2026-01-31T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.599127 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:01:46.132142239 +0000 UTC Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.602519 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.602636 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.602792 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:29 crc kubenswrapper[4687]: E0131 06:44:29.602770 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:29 crc kubenswrapper[4687]: E0131 06:44:29.602866 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:29 crc kubenswrapper[4687]: E0131 06:44:29.602944 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.602599 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:29 crc kubenswrapper[4687]: E0131 06:44:29.603200 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.652080 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.652130 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.652145 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.652162 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.652173 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:29Z","lastTransitionTime":"2026-01-31T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.754721 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.754805 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.754820 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.754845 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.754860 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:29Z","lastTransitionTime":"2026-01-31T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.857159 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.857222 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.857235 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.857252 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.857263 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:29Z","lastTransitionTime":"2026-01-31T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.960336 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.960384 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.960398 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.960442 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:29 crc kubenswrapper[4687]: I0131 06:44:29.960455 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:29Z","lastTransitionTime":"2026-01-31T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.062892 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.062997 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.063017 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.063047 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.063060 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:30Z","lastTransitionTime":"2026-01-31T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.166636 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.166705 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.166720 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.166744 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.166760 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:30Z","lastTransitionTime":"2026-01-31T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.269791 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.269846 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.269862 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.269880 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.269895 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:30Z","lastTransitionTime":"2026-01-31T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.372367 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.372441 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.372453 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.372470 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.372484 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:30Z","lastTransitionTime":"2026-01-31T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.474957 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.475011 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.475024 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.475041 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.475053 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:30Z","lastTransitionTime":"2026-01-31T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.577437 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.577482 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.577493 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.577510 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.577525 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:30Z","lastTransitionTime":"2026-01-31T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.600300 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 13:51:50.426425156 +0000 UTC Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.680514 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.680566 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.680577 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.680593 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.680604 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:30Z","lastTransitionTime":"2026-01-31T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.783552 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.783645 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.783658 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.783683 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.783699 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:30Z","lastTransitionTime":"2026-01-31T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.886608 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.886884 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.886985 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.887481 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.887581 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:30Z","lastTransitionTime":"2026-01-31T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.991503 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.991561 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.991572 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.991594 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:30 crc kubenswrapper[4687]: I0131 06:44:30.991608 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:30Z","lastTransitionTime":"2026-01-31T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.094899 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.094978 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.094987 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.095016 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.095037 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:31Z","lastTransitionTime":"2026-01-31T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.197022 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.197076 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.197087 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.197102 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.197114 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:31Z","lastTransitionTime":"2026-01-31T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.299953 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.300000 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.300012 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.300029 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.300040 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:31Z","lastTransitionTime":"2026-01-31T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.403469 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.403532 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.403548 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.403564 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.403577 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:31Z","lastTransitionTime":"2026-01-31T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.506813 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.506859 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.506867 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.506883 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.506894 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:31Z","lastTransitionTime":"2026-01-31T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.601245 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 17:30:53.104337074 +0000 UTC Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.602590 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.602644 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.602652 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.602727 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:31 crc kubenswrapper[4687]: E0131 06:44:31.602745 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:31 crc kubenswrapper[4687]: E0131 06:44:31.602820 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:31 crc kubenswrapper[4687]: E0131 06:44:31.602895 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:31 crc kubenswrapper[4687]: E0131 06:44:31.602979 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.608629 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.608694 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.608708 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.608736 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.608751 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:31Z","lastTransitionTime":"2026-01-31T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.711167 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.711486 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.711495 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.711511 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.711520 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:31Z","lastTransitionTime":"2026-01-31T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.814037 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.814097 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.814106 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.814121 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.814132 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:31Z","lastTransitionTime":"2026-01-31T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.917087 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.917146 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.917160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.917179 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:31 crc kubenswrapper[4687]: I0131 06:44:31.917192 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:31Z","lastTransitionTime":"2026-01-31T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.020482 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.020552 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.020574 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.020604 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.020625 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:32Z","lastTransitionTime":"2026-01-31T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.029607 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.029668 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.029690 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.029717 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.029740 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:32Z","lastTransitionTime":"2026-01-31T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:32 crc kubenswrapper[4687]: E0131 06:44:32.043134 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.047926 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.047959 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.047970 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.047987 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.048000 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:32Z","lastTransitionTime":"2026-01-31T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:32 crc kubenswrapper[4687]: E0131 06:44:32.060260 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.064112 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.064155 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.064164 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.064178 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.064186 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:32Z","lastTransitionTime":"2026-01-31T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:32 crc kubenswrapper[4687]: E0131 06:44:32.076338 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.080537 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.080580 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.080592 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.080609 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.080621 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:32Z","lastTransitionTime":"2026-01-31T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:32 crc kubenswrapper[4687]: E0131 06:44:32.093373 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.097276 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.097310 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.097321 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.097373 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.097387 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:32Z","lastTransitionTime":"2026-01-31T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:32 crc kubenswrapper[4687]: E0131 06:44:32.111374 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:32 crc kubenswrapper[4687]: E0131 06:44:32.111517 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.122744 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.122802 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.122813 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.122829 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.122840 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:32Z","lastTransitionTime":"2026-01-31T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.225272 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.225317 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.225328 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.225345 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.225356 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:32Z","lastTransitionTime":"2026-01-31T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.327657 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.327687 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.327718 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.327731 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.327740 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:32Z","lastTransitionTime":"2026-01-31T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.429837 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.429882 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.429895 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.429914 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.429926 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:32Z","lastTransitionTime":"2026-01-31T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.532620 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.532677 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.532694 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.532718 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.532736 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:32Z","lastTransitionTime":"2026-01-31T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.601775 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 09:54:42.326304175 +0000 UTC Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.634897 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.634937 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.634945 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.634957 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.634966 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:32Z","lastTransitionTime":"2026-01-31T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.738332 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.738396 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.738429 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.738448 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.738462 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:32Z","lastTransitionTime":"2026-01-31T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.840881 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.840920 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.840931 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.840948 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.840958 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:32Z","lastTransitionTime":"2026-01-31T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.943270 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.943318 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.943329 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.943346 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:32 crc kubenswrapper[4687]: I0131 06:44:32.943360 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:32Z","lastTransitionTime":"2026-01-31T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.046171 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.046214 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.046226 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.046240 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.046251 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:33Z","lastTransitionTime":"2026-01-31T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.148789 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.148833 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.148843 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.148858 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.148870 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:33Z","lastTransitionTime":"2026-01-31T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.251388 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.251484 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.251496 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.251517 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.251530 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:33Z","lastTransitionTime":"2026-01-31T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.354228 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.354275 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.354301 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.354326 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.354339 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:33Z","lastTransitionTime":"2026-01-31T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.457160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.457199 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.457214 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.457231 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.457242 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:33Z","lastTransitionTime":"2026-01-31T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.559660 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.559698 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.559709 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.559723 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.559734 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:33Z","lastTransitionTime":"2026-01-31T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.603009 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 17:10:32.913460301 +0000 UTC Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.603024 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.603476 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:33 crc kubenswrapper[4687]: E0131 06:44:33.604112 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.603528 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:33 crc kubenswrapper[4687]: E0131 06:44:33.604193 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.603489 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:33 crc kubenswrapper[4687]: E0131 06:44:33.604011 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:33 crc kubenswrapper[4687]: E0131 06:44:33.604269 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.624880 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.662862 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.662911 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.662921 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.662939 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.662951 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:33Z","lastTransitionTime":"2026-01-31T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.766076 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.766138 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.766150 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.766167 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.766178 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:33Z","lastTransitionTime":"2026-01-31T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.868440 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.868488 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.868502 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.868516 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.868525 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:33Z","lastTransitionTime":"2026-01-31T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.970915 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.971179 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.971306 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.971390 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:33 crc kubenswrapper[4687]: I0131 06:44:33.971498 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:33Z","lastTransitionTime":"2026-01-31T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.074608 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.074661 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.074679 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.074701 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.074719 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:34Z","lastTransitionTime":"2026-01-31T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.176869 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.176916 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.176928 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.176944 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.176955 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:34Z","lastTransitionTime":"2026-01-31T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.279888 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.279942 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.279960 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.279980 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.279992 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:34Z","lastTransitionTime":"2026-01-31T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.382230 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.382278 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.382289 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.382307 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.382319 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:34Z","lastTransitionTime":"2026-01-31T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.485004 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.485042 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.485050 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.485063 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.485072 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:34Z","lastTransitionTime":"2026-01-31T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.587963 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.588002 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.588010 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.588025 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.588034 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:34Z","lastTransitionTime":"2026-01-31T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.604824 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:48:35.476195072 +0000 UTC Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.690630 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.690673 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.690688 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.690708 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.690722 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:34Z","lastTransitionTime":"2026-01-31T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.792971 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.793010 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.793022 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.793037 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.793048 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:34Z","lastTransitionTime":"2026-01-31T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.896041 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.896306 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.896395 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.896518 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.896612 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:34Z","lastTransitionTime":"2026-01-31T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.999761 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.999818 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.999829 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:34 crc kubenswrapper[4687]: I0131 06:44:34.999847 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:34.999861 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:34Z","lastTransitionTime":"2026-01-31T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.102792 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.102831 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.102841 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.102857 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.102869 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:35Z","lastTransitionTime":"2026-01-31T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.205432 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.205467 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.205476 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.205491 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.205500 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:35Z","lastTransitionTime":"2026-01-31T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.308065 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.308105 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.308115 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.308134 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.308147 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:35Z","lastTransitionTime":"2026-01-31T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.410866 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.410903 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.410913 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.410926 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.410936 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:35Z","lastTransitionTime":"2026-01-31T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.512496 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.512762 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.512845 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.512959 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.513064 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:35Z","lastTransitionTime":"2026-01-31T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.603070 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.603123 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.603129 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.603084 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:35 crc kubenswrapper[4687]: E0131 06:44:35.603292 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:35 crc kubenswrapper[4687]: E0131 06:44:35.603383 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:35 crc kubenswrapper[4687]: E0131 06:44:35.603463 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:35 crc kubenswrapper[4687]: E0131 06:44:35.603849 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.605367 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 09:04:19.482739708 +0000 UTC Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.615844 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.615897 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.615910 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.615927 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.615939 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:35Z","lastTransitionTime":"2026-01-31T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.620387 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee039356-c458-45b0-84a6-c533eec8da86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 06:43:23.248102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 06:43:23.248252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 06:43:23.249167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3962814661/tls.crt::/tmp/serving-cert-3962814661/tls.key\\\\\\\"\\\\nI0131 06:43:23.594299 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 06:43:23.598356 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 06:43:23.598372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 06:43:23.598389 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 06:43:23.598395 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 06:43:23.605472 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 06:43:23.605713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605724 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 06:43:23.605741 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 06:43:23.605748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 06:43:23.605755 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 06:43:23.605761 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 06:43:23.605806 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 06:43:23.607739 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.634297 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc8f4e43-42ed-4239-b5a6-eca8637c56b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04708ef780ef52b3d46cde05dccdfd279e35e95361c04c6ca6eb53e5dc4f28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f335bab373ebe2b12635eef00d6ce30aab02a4030f539693d2f422cd1dcabd1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9c1d6ab561a8059406b2536ed128acb558aba72e0b7193cfef337212caf0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.646280 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.660056 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.674277 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.686210 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dead0f10-2469-49a4-8d26-93fc90d6451d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scf27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbxj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc 
kubenswrapper[4687]: I0131 06:44:35.700455 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a933e3df-17c7-4b7a-b8a4-f9fdfc1f9116\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177e7d2f12d88ec65f696b1500003d22464529c2a37000255906cb4e3598c69e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196dcd26b49160f379dc443180c93bb1148dd0d2977f516ff48a81d618e3caef\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9f67c3167938a802b72b7df28c4cf024417f605419ea56865822746cceed27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48e671b48b6b48982fa0f81a88aea7f4fa795cfb58a09b25d9f02315a0ad2183\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.715957 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04611a4a55f0da316a7f8e063d2703e5ae90bc7e3598bbfdce59fabc05d134ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.717884 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.718043 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.718150 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.718255 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.718342 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:35Z","lastTransitionTime":"2026-01-31T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.734080 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0586ba394d9139a31c42545d43ebb2f360eb8aa1f1fc9dcd20fcde18cc01ae6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.746152 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sv5n6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4abe4f-d012-452a-81cf-6e96ec9a8dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1edf3b7c2da22d5befd664fab8bd8e27d6ff3789245f06b3906528a461bb705a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxb4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sv5n6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.760250 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c340f403-35a5-4c6d-80b0-2e0fe7399192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7b167e5335f3af1312762607d2cb040c21edb11786fe0bbfa8fbc15f851b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5
a21c9de0e5795deac7c5120a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4tdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkgkr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.773839 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80931922f22acc435dbacea830079acb1efb4782ffef8e74b186d929391f2ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96ce7f2d1e31321decea2dc16d5f6d818d3eb0dff917a9ffb112b8add0ee8e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.785945 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bffca17-c223-4bd0-b78c-a5b059413223\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c95fc7ae3e948c99b17f36a84b15dd921969168a3e30de3d1afbc2c5b40476f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471c610ac5099adb133f88c53cc5652dd61eb
f82ac79c8eb27e0842b7c4ae63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tddl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptfrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.807238 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66f37ff8-20a3-41fb-ad38-bc90d60fd416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c45b5bfe32874defddedac767bd3036e9eaf7d5ba834c72df4794f02f2c5b98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6dd1f7483763e9d97523d97e0140717ed7235ef957ccaf55487009d05af1062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1ff47b0d742f3d52f844b6fb1dd5e246f7c4bb9a73efbc92658996e0359c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e9c2fbc1edcab9febc9b058bcc455ca1285f437802a1309fc03eda8568fe9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2534273656fd36d31225ae4887c84efb05bb47dada0112a741fc54e98b526084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5e0c8f592467b911ddb1f973d61e7d2c044ae3d857e34ea7fa92e24f0c47ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5e0c8f592467b911ddb1f973d61e7d2c044ae3d857e34ea7fa92e24f0c47ec3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed7f156538831ab606fc24563177014cb2ebb140d38cf8809e3af8b17a64c548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed7f156538831ab606fc24563177014cb2ebb140d38cf8809e3af8b17a64c548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d53b53feb14fb79c0e2a976021459a6662af87f8d700386477ebe8f9837f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d53b53feb14fb79c0e2a976021459a6662af87f8d700386477ebe8f9837f42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.819046 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae46619-0b2b-4fa4-901a-c3b5c31fc100\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27d1e2abd6e78dbd3c9966ee4a6b5ae79d62dc6abacfbafdf7ebd9073e41ac56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45cffdf8b32d48ef8e1cfbb902cccbced8fa7876fddc2547eeeaa7c48fa90e79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:42:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.820686 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.820794 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.820889 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.820998 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.821103 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:35Z","lastTransitionTime":"2026-01-31T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.834939 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77mzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96c21054-65ed-4db4-969f-bbb10f612772\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f976ffe4cdfe5f8bfeef0529c144f8511fd1e7562a2202b11fd602c815560562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:44:16Z\\\",\\\"message\\\":\\\"2026-01-31T06:43:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a2da230a-2c2a-42da-98e3-86d9a47550ef\\\\n2026-01-31T06:43:31+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a2da230a-2c2a-42da-98e3-86d9a47550ef to /host/opt/cni/bin/\\\\n2026-01-31T06:43:31Z [verbose] multus-daemon started\\\\n2026-01-31T06:43:31Z [verbose] Readiness Indicator file check\\\\n2026-01-31T06:44:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77mzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.852490 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d57913d8-5742-4fd2-925b-6721231e7863\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38d86db47611b1cadbd6b6e1718220919a14c45b4a184956afc517bef11487a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc8acc8ec386f837895063649ee2c2476ce260fd9042e886e79535890f5b4e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1a485ec5f5293e1e547967603f76b4ab98b29bf78d11042d56aff4a55fadd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dae2e152f0a197c67d440205f432e4f5f60d9f8fb2c874f3f8a91fcffcf9699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c68c
05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c68c05b1f6a51eef7e426cf251e0c164fcc05b8c0dcbaf6674b214548395241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa688073ccb3a605b9f8760c7cc7c747c4256fa2e5e0fcf3ca0534a8fdb87af0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f96ab4987b308a73c00a3f395ce7bb87b385c3bc415bea9212df544312a772d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drwlj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlk4z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.874210 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T06:44:22Z\\\",\\\"message\\\":\\\"pis/informers/externalversions/factory.go:140\\\\nI0131 06:44:22.381680 6885 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 06:44:22.381930 6885 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0131 06:44:22.381950 6885 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 06:44:22.381955 6885 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 06:44:22.381997 6885 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 06:44:22.382005 6885 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 06:44:22.382013 6885 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 06:44:22.382025 6885 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 06:44:22.382026 6885 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 06:44:22.382049 6885 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 06:44:22.382059 6885 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 06:44:22.382073 6885 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 06:44:22.382082 6885 factory.go:656] Stopping watch factory\\\\nI0131 06:44:22.382094 6885 ovnkube.go:599] Stopped ovnkube\\\\nI0131 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T06:44:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62d13b91ec80ae0636
69f82bf871d3a581365354a321d9d79e1535b2952455d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T06:43:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T06:43:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9ts2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zvpgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.886947 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bfpqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83663f48-cbeb-4689-ad08-405a1d894791\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T06:43:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f042adf44d170535697e99668f207236e042c377508e5a575fa8b4fed15819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T06:43:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nq48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T06:43:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bfpqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.923394 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.923460 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.923474 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.923492 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:35 crc kubenswrapper[4687]: I0131 06:44:35.923505 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:35Z","lastTransitionTime":"2026-01-31T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.025631 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.025674 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.025684 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.025699 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.025708 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:36Z","lastTransitionTime":"2026-01-31T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.127794 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.127855 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.127867 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.127885 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.127898 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:36Z","lastTransitionTime":"2026-01-31T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.230463 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.230630 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.230661 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.230687 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.230706 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:36Z","lastTransitionTime":"2026-01-31T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.333670 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.333707 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.333718 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.333734 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.333747 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:36Z","lastTransitionTime":"2026-01-31T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.435649 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.435681 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.435689 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.435702 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.435712 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:36Z","lastTransitionTime":"2026-01-31T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.537616 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.537654 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.537665 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.537678 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.537691 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:36Z","lastTransitionTime":"2026-01-31T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.606361 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 15:29:23.422722015 +0000 UTC Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.639952 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.640014 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.640027 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.640042 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.640051 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:36Z","lastTransitionTime":"2026-01-31T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.742251 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.742316 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.742331 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.742347 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.742359 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:36Z","lastTransitionTime":"2026-01-31T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.845137 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.845182 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.845194 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.845211 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.845225 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:36Z","lastTransitionTime":"2026-01-31T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.947758 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.947811 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.947821 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.947838 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:36 crc kubenswrapper[4687]: I0131 06:44:36.947850 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:36Z","lastTransitionTime":"2026-01-31T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.050052 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.050121 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.050134 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.050152 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.050162 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:37Z","lastTransitionTime":"2026-01-31T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.151712 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.151756 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.151769 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.151787 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.151801 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:37Z","lastTransitionTime":"2026-01-31T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.254497 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.254549 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.254562 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.254582 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.254595 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:37Z","lastTransitionTime":"2026-01-31T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.357286 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.357328 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.357338 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.357350 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.357359 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:37Z","lastTransitionTime":"2026-01-31T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.459693 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.459741 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.459752 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.459771 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.459782 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:37Z","lastTransitionTime":"2026-01-31T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.562304 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.562363 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.562372 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.562387 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.562396 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:37Z","lastTransitionTime":"2026-01-31T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.602945 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.602984 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.602989 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:37 crc kubenswrapper[4687]: E0131 06:44:37.603103 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.603164 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:37 crc kubenswrapper[4687]: E0131 06:44:37.603472 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:37 crc kubenswrapper[4687]: E0131 06:44:37.603632 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:37 crc kubenswrapper[4687]: E0131 06:44:37.603382 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.606794 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:19:56.963667464 +0000 UTC Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.664233 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.664519 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.664630 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.664724 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.664795 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:37Z","lastTransitionTime":"2026-01-31T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.767622 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.767722 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.767740 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.767801 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.767819 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:37Z","lastTransitionTime":"2026-01-31T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.870617 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.870693 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.870703 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.870717 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.870726 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:37Z","lastTransitionTime":"2026-01-31T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.973397 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.973467 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.973480 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.973498 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:37 crc kubenswrapper[4687]: I0131 06:44:37.973513 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:37Z","lastTransitionTime":"2026-01-31T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.075985 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.076068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.076080 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.076095 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.076106 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:38Z","lastTransitionTime":"2026-01-31T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.177769 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.177800 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.177812 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.177826 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.177837 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:38Z","lastTransitionTime":"2026-01-31T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.279824 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.279983 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.280002 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.280021 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.280033 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:38Z","lastTransitionTime":"2026-01-31T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.383088 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.383141 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.383157 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.383175 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.383189 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:38Z","lastTransitionTime":"2026-01-31T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.485749 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.485793 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.485829 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.485851 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.485864 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:38Z","lastTransitionTime":"2026-01-31T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.588523 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.588586 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.588598 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.588622 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.588636 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:38Z","lastTransitionTime":"2026-01-31T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.603959 4687 scope.go:117] "RemoveContainer" containerID="900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d" Jan 31 06:44:38 crc kubenswrapper[4687]: E0131 06:44:38.604202 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.606913 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 19:04:46.225878568 +0000 UTC Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.691744 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.691803 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.691815 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.691832 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.691843 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:38Z","lastTransitionTime":"2026-01-31T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.795393 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.795504 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.795538 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.795561 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.795574 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:38Z","lastTransitionTime":"2026-01-31T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.897957 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.897999 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.898008 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.898022 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:38 crc kubenswrapper[4687]: I0131 06:44:38.898033 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:38Z","lastTransitionTime":"2026-01-31T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.001529 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.001579 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.001590 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.001608 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.001619 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:39Z","lastTransitionTime":"2026-01-31T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.103476 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.103514 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.103521 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.103534 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.103542 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:39Z","lastTransitionTime":"2026-01-31T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.205794 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.205844 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.205856 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.205874 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.205919 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:39Z","lastTransitionTime":"2026-01-31T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.308727 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.308788 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.308801 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.308816 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.308825 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:39Z","lastTransitionTime":"2026-01-31T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.411824 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.411905 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.411919 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.411944 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.411958 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:39Z","lastTransitionTime":"2026-01-31T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.514348 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.514387 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.514398 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.514434 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.514447 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:39Z","lastTransitionTime":"2026-01-31T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.603445 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.603510 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.603519 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.603451 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:39 crc kubenswrapper[4687]: E0131 06:44:39.603574 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:39 crc kubenswrapper[4687]: E0131 06:44:39.603633 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:39 crc kubenswrapper[4687]: E0131 06:44:39.603685 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:39 crc kubenswrapper[4687]: E0131 06:44:39.603763 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.607321 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:15:03.541746011 +0000 UTC Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.616441 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.616484 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.616496 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.616513 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.616525 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:39Z","lastTransitionTime":"2026-01-31T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.718911 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.718961 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.718976 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.718996 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.719013 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:39Z","lastTransitionTime":"2026-01-31T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.821125 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.821167 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.821181 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.821198 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.821210 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:39Z","lastTransitionTime":"2026-01-31T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.924380 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.924432 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.924442 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.924456 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:39 crc kubenswrapper[4687]: I0131 06:44:39.924466 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:39Z","lastTransitionTime":"2026-01-31T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.026832 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.026884 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.026897 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.026911 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.026920 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:40Z","lastTransitionTime":"2026-01-31T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.129147 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.129181 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.129191 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.129203 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.129211 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:40Z","lastTransitionTime":"2026-01-31T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.233054 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.233104 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.233121 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.233140 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.233156 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:40Z","lastTransitionTime":"2026-01-31T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.336261 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.336314 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.336329 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.336347 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.336358 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:40Z","lastTransitionTime":"2026-01-31T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.439201 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.439240 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.439251 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.439266 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.439276 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:40Z","lastTransitionTime":"2026-01-31T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.541975 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.542023 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.542032 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.542045 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.542054 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:40Z","lastTransitionTime":"2026-01-31T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.608314 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:21:40.021492368 +0000 UTC Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.645185 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.645260 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.645283 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.645313 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.645333 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:40Z","lastTransitionTime":"2026-01-31T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.749678 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.749714 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.749723 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.749740 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.749751 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:40Z","lastTransitionTime":"2026-01-31T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.852593 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.852653 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.852663 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.852684 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.852693 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:40Z","lastTransitionTime":"2026-01-31T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.955642 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.955693 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.955716 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.955733 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:40 crc kubenswrapper[4687]: I0131 06:44:40.955745 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:40Z","lastTransitionTime":"2026-01-31T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.059142 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.059208 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.059228 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.059246 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.059284 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:41Z","lastTransitionTime":"2026-01-31T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.161562 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.161600 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.161610 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.161622 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.161632 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:41Z","lastTransitionTime":"2026-01-31T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.263548 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.263606 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.263623 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.263644 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.263662 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:41Z","lastTransitionTime":"2026-01-31T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.365649 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.365687 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.365695 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.365708 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.365717 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:41Z","lastTransitionTime":"2026-01-31T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.468032 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.468073 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.468083 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.468101 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.468110 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:41Z","lastTransitionTime":"2026-01-31T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.570769 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.570810 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.570819 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.570832 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.570842 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:41Z","lastTransitionTime":"2026-01-31T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.602729 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:41 crc kubenswrapper[4687]: E0131 06:44:41.603107 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.602862 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:41 crc kubenswrapper[4687]: E0131 06:44:41.603342 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.602789 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:41 crc kubenswrapper[4687]: E0131 06:44:41.603563 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.602936 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:41 crc kubenswrapper[4687]: E0131 06:44:41.603885 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.608461 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:50:27.684345806 +0000 UTC Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.672958 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.673000 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.673012 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.673029 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.673040 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:41Z","lastTransitionTime":"2026-01-31T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.775283 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.775316 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.775326 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.775340 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.775349 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:41Z","lastTransitionTime":"2026-01-31T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.877977 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.878036 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.878058 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.878087 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.878109 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:41Z","lastTransitionTime":"2026-01-31T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.980373 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.980434 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.980447 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.980464 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:41 crc kubenswrapper[4687]: I0131 06:44:41.980475 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:41Z","lastTransitionTime":"2026-01-31T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.082888 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.082942 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.082952 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.082968 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.082977 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:42Z","lastTransitionTime":"2026-01-31T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.185089 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.185121 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.185131 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.185144 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.185154 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:42Z","lastTransitionTime":"2026-01-31T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.288354 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.288629 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.288668 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.288703 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.288743 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:42Z","lastTransitionTime":"2026-01-31T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.291692 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.291759 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.291770 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.291788 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.291799 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:42Z","lastTransitionTime":"2026-01-31T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:42 crc kubenswrapper[4687]: E0131 06:44:42.307629 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.312775 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.312824 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.312838 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.312860 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.312876 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:42Z","lastTransitionTime":"2026-01-31T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:42 crc kubenswrapper[4687]: E0131 06:44:42.328712 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.332374 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.332441 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.332453 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.332474 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.332489 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:42Z","lastTransitionTime":"2026-01-31T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:42 crc kubenswrapper[4687]: E0131 06:44:42.348233 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.351186 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.351306 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.351374 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.351466 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.351565 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:42Z","lastTransitionTime":"2026-01-31T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:42 crc kubenswrapper[4687]: E0131 06:44:42.366696 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.370697 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.370741 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.370749 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.370765 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.370775 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:42Z","lastTransitionTime":"2026-01-31T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:42 crc kubenswrapper[4687]: E0131 06:44:42.384183 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T06:44:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8941b06a-1ce8-4ac7-98bb-d1b5c4b40f67\\\",\\\"systemUUID\\\":\\\"0982288b-de9c-4e82-b208-7781320b1d02\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T06:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 06:44:42 crc kubenswrapper[4687]: E0131 06:44:42.384305 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.390722 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.390746 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.390756 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.390773 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.390786 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:42Z","lastTransitionTime":"2026-01-31T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.493451 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.493494 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.493510 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.493530 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.493546 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:42Z","lastTransitionTime":"2026-01-31T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.595514 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.595552 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.595562 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.595576 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.595588 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:42Z","lastTransitionTime":"2026-01-31T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.609306 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 07:55:02.014222772 +0000 UTC Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.698655 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.698691 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.698701 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.698716 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.698725 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:42Z","lastTransitionTime":"2026-01-31T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.801802 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.801846 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.801861 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.801884 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.801894 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:42Z","lastTransitionTime":"2026-01-31T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.906062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.906104 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.906114 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.906135 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:42 crc kubenswrapper[4687]: I0131 06:44:42.906156 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:42Z","lastTransitionTime":"2026-01-31T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.009667 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.009739 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.009750 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.009766 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.009778 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:43Z","lastTransitionTime":"2026-01-31T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.112150 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.112202 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.112215 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.112230 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.112243 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:43Z","lastTransitionTime":"2026-01-31T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.214673 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.214730 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.214744 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.214761 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.214770 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:43Z","lastTransitionTime":"2026-01-31T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.317326 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.317381 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.317396 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.317432 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.317446 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:43Z","lastTransitionTime":"2026-01-31T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.420309 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.420366 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.420378 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.420393 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.420435 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:43Z","lastTransitionTime":"2026-01-31T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.522792 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.522909 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.523033 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.523049 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.523062 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:43Z","lastTransitionTime":"2026-01-31T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.602851 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.602981 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.603014 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:43 crc kubenswrapper[4687]: E0131 06:44:43.603129 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.603221 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:43 crc kubenswrapper[4687]: E0131 06:44:43.603348 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:43 crc kubenswrapper[4687]: E0131 06:44:43.603383 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:43 crc kubenswrapper[4687]: E0131 06:44:43.603823 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.609756 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 00:01:03.09149336 +0000 UTC Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.625746 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.625791 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.625800 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.625815 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.625824 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:43Z","lastTransitionTime":"2026-01-31T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.728239 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.728311 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.728322 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.728340 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.728352 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:43Z","lastTransitionTime":"2026-01-31T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.830622 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.830680 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.830694 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.830711 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.830724 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:43Z","lastTransitionTime":"2026-01-31T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.933777 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.933819 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.933829 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.933842 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:43 crc kubenswrapper[4687]: I0131 06:44:43.933853 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:43Z","lastTransitionTime":"2026-01-31T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.037122 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.037184 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.037200 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.037225 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.037242 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:44Z","lastTransitionTime":"2026-01-31T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.139530 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.139598 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.139608 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.139633 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.139651 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:44Z","lastTransitionTime":"2026-01-31T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.242466 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.242517 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.242527 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.242547 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.242560 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:44Z","lastTransitionTime":"2026-01-31T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.345901 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.345954 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.345963 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.345983 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.345993 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:44Z","lastTransitionTime":"2026-01-31T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.448533 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.448570 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.448582 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.448597 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.448627 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:44Z","lastTransitionTime":"2026-01-31T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.551380 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.551433 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.551445 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.551461 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.551473 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:44Z","lastTransitionTime":"2026-01-31T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.610944 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 16:56:04.253778482 +0000 UTC Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.654138 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.654186 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.654198 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.654214 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.654226 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:44Z","lastTransitionTime":"2026-01-31T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.757035 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.757089 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.757098 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.757114 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.757124 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:44Z","lastTransitionTime":"2026-01-31T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.862048 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.862116 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.862134 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.862158 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.862182 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:44Z","lastTransitionTime":"2026-01-31T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.965290 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.965332 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.965347 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.965363 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:44 crc kubenswrapper[4687]: I0131 06:44:44.965374 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:44Z","lastTransitionTime":"2026-01-31T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.067612 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.067643 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.067651 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.067665 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.067674 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:45Z","lastTransitionTime":"2026-01-31T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.170025 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.170056 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.170065 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.170078 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.170087 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:45Z","lastTransitionTime":"2026-01-31T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.272341 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.272386 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.272401 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.272437 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.272451 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:45Z","lastTransitionTime":"2026-01-31T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.374377 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.374429 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.374440 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.374455 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.374468 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:45Z","lastTransitionTime":"2026-01-31T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.476638 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.476680 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.476688 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.476701 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.476710 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:45Z","lastTransitionTime":"2026-01-31T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.578642 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.578686 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.578698 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.578713 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.578725 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:45Z","lastTransitionTime":"2026-01-31T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.602450 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.602484 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:45 crc kubenswrapper[4687]: E0131 06:44:45.602659 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.602704 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.602735 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:45 crc kubenswrapper[4687]: E0131 06:44:45.602924 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:45 crc kubenswrapper[4687]: E0131 06:44:45.603006 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:45 crc kubenswrapper[4687]: E0131 06:44:45.603137 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.611293 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 10:06:58.492043441 +0000 UTC Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.644883 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=12.644854877 podStartE2EDuration="12.644854877s" podCreationTimestamp="2026-01-31 06:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:44:45.644454986 +0000 UTC m=+111.921714561" watchObservedRunningTime="2026-01-31 06:44:45.644854877 +0000 UTC m=+111.922114452" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.645097 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bfpqq" podStartSLOduration=78.645090383 podStartE2EDuration="1m18.645090383s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:44:45.621365124 +0000 UTC m=+111.898624719" watchObservedRunningTime="2026-01-31 06:44:45.645090383 +0000 UTC m=+111.922349958" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.657712 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=30.657691837 podStartE2EDuration="30.657691837s" podCreationTimestamp="2026-01-31 06:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:44:45.657580354 +0000 UTC m=+111.934839929" 
watchObservedRunningTime="2026-01-31 06:44:45.657691837 +0000 UTC m=+111.934951412" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.681456 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.681485 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.681494 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.681507 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.681515 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:45Z","lastTransitionTime":"2026-01-31T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.690528 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-77mzd" podStartSLOduration=78.690510597 podStartE2EDuration="1m18.690510597s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:44:45.671681668 +0000 UTC m=+111.948941243" watchObservedRunningTime="2026-01-31 06:44:45.690510597 +0000 UTC m=+111.967770172" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.712630 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jlk4z" podStartSLOduration=78.712600902 podStartE2EDuration="1m18.712600902s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:44:45.691133863 +0000 UTC m=+111.968393438" watchObservedRunningTime="2026-01-31 06:44:45.712600902 +0000 UTC m=+111.989860517" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.738196 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.73817863 podStartE2EDuration="1m21.73817863s" podCreationTimestamp="2026-01-31 06:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:44:45.738040757 +0000 UTC m=+112.015300342" watchObservedRunningTime="2026-01-31 06:44:45.73817863 +0000 UTC m=+112.015438205" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.766329 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=82.766310256 
podStartE2EDuration="1m22.766310256s" podCreationTimestamp="2026-01-31 06:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:44:45.750559749 +0000 UTC m=+112.027819324" watchObservedRunningTime="2026-01-31 06:44:45.766310256 +0000 UTC m=+112.043569851" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.784067 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.784116 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.784129 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.784146 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.784159 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:45Z","lastTransitionTime":"2026-01-31T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.802388 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.802366002 podStartE2EDuration="49.802366002s" podCreationTimestamp="2026-01-31 06:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:44:45.801747475 +0000 UTC m=+112.079007070" watchObservedRunningTime="2026-01-31 06:44:45.802366002 +0000 UTC m=+112.079625577" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.845290 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-sv5n6" podStartSLOduration=78.845272629 podStartE2EDuration="1m18.845272629s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:44:45.844910719 +0000 UTC m=+112.122170294" watchObservedRunningTime="2026-01-31 06:44:45.845272629 +0000 UTC m=+112.122532204" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.856607 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podStartSLOduration=78.856589069 podStartE2EDuration="1m18.856589069s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:44:45.856431235 +0000 UTC m=+112.133690840" watchObservedRunningTime="2026-01-31 06:44:45.856589069 +0000 UTC m=+112.133848644" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.886958 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:45 crc kubenswrapper[4687]: 
I0131 06:44:45.886989 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.886997 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.887010 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.887018 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:45Z","lastTransitionTime":"2026-01-31T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.913006 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs\") pod \"network-metrics-daemon-hbxj7\" (UID: \"dead0f10-2469-49a4-8d26-93fc90d6451d\") " pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:45 crc kubenswrapper[4687]: E0131 06:44:45.913204 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 06:44:45 crc kubenswrapper[4687]: E0131 06:44:45.913268 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs podName:dead0f10-2469-49a4-8d26-93fc90d6451d nodeName:}" failed. No retries permitted until 2026-01-31 06:45:49.913244621 +0000 UTC m=+176.190504196 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs") pod "network-metrics-daemon-hbxj7" (UID: "dead0f10-2469-49a4-8d26-93fc90d6451d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.989018 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.989068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.989080 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.989095 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:45 crc kubenswrapper[4687]: I0131 06:44:45.989107 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:45Z","lastTransitionTime":"2026-01-31T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.091708 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.091741 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.091749 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.091760 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.091769 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:46Z","lastTransitionTime":"2026-01-31T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.194202 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.194241 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.194249 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.194265 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.194275 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:46Z","lastTransitionTime":"2026-01-31T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.296948 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.297072 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.297089 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.297112 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.297123 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:46Z","lastTransitionTime":"2026-01-31T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.399146 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.399187 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.399199 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.399216 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.399228 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:46Z","lastTransitionTime":"2026-01-31T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.501076 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.501127 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.501139 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.501155 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.501166 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:46Z","lastTransitionTime":"2026-01-31T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.604388 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.604477 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.604489 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.604510 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.604524 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:46Z","lastTransitionTime":"2026-01-31T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.612395 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 01:19:11.136343531 +0000 UTC Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.708756 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.708814 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.708840 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.708872 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.708896 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:46Z","lastTransitionTime":"2026-01-31T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.811595 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.811636 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.811644 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.811659 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.811668 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:46Z","lastTransitionTime":"2026-01-31T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.914950 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.914985 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.914996 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.915012 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:46 crc kubenswrapper[4687]: I0131 06:44:46.915022 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:46Z","lastTransitionTime":"2026-01-31T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.017372 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.017423 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.017434 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.017449 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.017460 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:47Z","lastTransitionTime":"2026-01-31T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.120074 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.120105 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.120117 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.120133 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.120144 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:47Z","lastTransitionTime":"2026-01-31T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.222768 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.222830 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.222909 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.222936 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.222954 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:47Z","lastTransitionTime":"2026-01-31T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.325009 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.325058 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.325073 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.325089 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.325102 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:47Z","lastTransitionTime":"2026-01-31T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.428013 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.428056 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.428065 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.428086 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.428105 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:47Z","lastTransitionTime":"2026-01-31T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.531777 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.531825 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.531855 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.531872 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.531883 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:47Z","lastTransitionTime":"2026-01-31T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.603391 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.603489 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.603556 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:47 crc kubenswrapper[4687]: E0131 06:44:47.603561 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:47 crc kubenswrapper[4687]: E0131 06:44:47.603631 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.603723 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:47 crc kubenswrapper[4687]: E0131 06:44:47.603788 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:47 crc kubenswrapper[4687]: E0131 06:44:47.603950 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.613370 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 11:03:28.282387929 +0000 UTC Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.634514 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.634599 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.634615 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.634639 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.634660 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:47Z","lastTransitionTime":"2026-01-31T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.737466 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.737506 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.737517 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.737530 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.737539 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:47Z","lastTransitionTime":"2026-01-31T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.840230 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.840261 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.840270 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.840284 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.840293 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:47Z","lastTransitionTime":"2026-01-31T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.942958 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.943003 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.943016 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.943040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:47 crc kubenswrapper[4687]: I0131 06:44:47.943054 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:47Z","lastTransitionTime":"2026-01-31T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.045069 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.045115 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.045127 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.045142 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.045152 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:48Z","lastTransitionTime":"2026-01-31T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.148399 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.148470 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.148480 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.148496 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.148506 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:48Z","lastTransitionTime":"2026-01-31T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.251603 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.251653 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.251668 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.251685 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.251699 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:48Z","lastTransitionTime":"2026-01-31T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.355287 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.355373 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.355398 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.355470 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.355495 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:48Z","lastTransitionTime":"2026-01-31T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.457719 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.457780 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.457798 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.457842 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.457854 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:48Z","lastTransitionTime":"2026-01-31T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.559788 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.559824 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.559835 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.559850 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.559860 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:48Z","lastTransitionTime":"2026-01-31T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.613905 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 20:38:51.902787018 +0000 UTC Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.662772 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.662850 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.662869 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.662920 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.662941 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:48Z","lastTransitionTime":"2026-01-31T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.765756 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.765816 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.765835 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.765856 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.765874 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:48Z","lastTransitionTime":"2026-01-31T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.868410 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.868506 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.868520 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.868542 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.868557 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:48Z","lastTransitionTime":"2026-01-31T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.971439 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.971480 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.971489 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.971504 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:48 crc kubenswrapper[4687]: I0131 06:44:48.971514 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:48Z","lastTransitionTime":"2026-01-31T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.073904 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.074148 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.074158 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.074172 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.074182 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:49Z","lastTransitionTime":"2026-01-31T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.177139 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.177185 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.177200 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.177221 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.177234 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:49Z","lastTransitionTime":"2026-01-31T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.279737 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.279790 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.279804 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.279821 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.279834 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:49Z","lastTransitionTime":"2026-01-31T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.383229 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.383277 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.383287 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.383302 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.383311 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:49Z","lastTransitionTime":"2026-01-31T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.485768 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.485836 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.485853 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.485878 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.485896 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:49Z","lastTransitionTime":"2026-01-31T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.588729 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.588806 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.588818 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.588854 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.588871 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:49Z","lastTransitionTime":"2026-01-31T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.603460 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.603606 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.603639 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.603691 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:49 crc kubenswrapper[4687]: E0131 06:44:49.603718 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:49 crc kubenswrapper[4687]: E0131 06:44:49.603838 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:49 crc kubenswrapper[4687]: E0131 06:44:49.604170 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:49 crc kubenswrapper[4687]: E0131 06:44:49.604441 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.614374 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:10:56.932400267 +0000 UTC Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.691756 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.691820 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.691833 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.691857 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.691873 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:49Z","lastTransitionTime":"2026-01-31T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.795266 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.795341 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.795354 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.795377 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.795392 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:49Z","lastTransitionTime":"2026-01-31T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.899646 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.899735 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.899939 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.899964 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:49 crc kubenswrapper[4687]: I0131 06:44:49.899982 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:49Z","lastTransitionTime":"2026-01-31T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.002929 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.003196 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.003301 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.003443 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.003521 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:50Z","lastTransitionTime":"2026-01-31T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.107151 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.107208 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.107231 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.107258 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.107280 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:50Z","lastTransitionTime":"2026-01-31T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.210404 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.210509 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.210526 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.210544 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.210557 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:50Z","lastTransitionTime":"2026-01-31T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.313254 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.313295 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.313306 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.313323 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.313335 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:50Z","lastTransitionTime":"2026-01-31T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.416031 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.416060 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.416068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.416085 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.416095 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:50Z","lastTransitionTime":"2026-01-31T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.518679 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.518723 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.518739 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.518753 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.518763 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:50Z","lastTransitionTime":"2026-01-31T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.603215 4687 scope.go:117] "RemoveContainer" containerID="900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d" Jan 31 06:44:50 crc kubenswrapper[4687]: E0131 06:44:50.603364 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zvpgn_openshift-ovn-kubernetes(55484aa7-5d82-4f2e-ab22-2ceae9c90c96)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.614596 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 21:08:17.206320044 +0000 UTC Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.621599 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.621689 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.621709 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.621735 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.621748 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:50Z","lastTransitionTime":"2026-01-31T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.730656 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.730729 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.730755 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.730800 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.730823 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:50Z","lastTransitionTime":"2026-01-31T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.833211 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.833262 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.833274 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.833292 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.833307 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:50Z","lastTransitionTime":"2026-01-31T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.936066 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.936109 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.936118 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.936134 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:50 crc kubenswrapper[4687]: I0131 06:44:50.936143 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:50Z","lastTransitionTime":"2026-01-31T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.038217 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.038259 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.038270 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.038286 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.038297 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:51Z","lastTransitionTime":"2026-01-31T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.141407 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.142063 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.142083 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.142099 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.142113 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:51Z","lastTransitionTime":"2026-01-31T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.244945 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.245013 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.245025 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.245047 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.245067 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:51Z","lastTransitionTime":"2026-01-31T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.347733 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.347787 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.347804 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.347825 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.347842 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:51Z","lastTransitionTime":"2026-01-31T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.451397 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.451482 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.451494 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.451512 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.451525 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:51Z","lastTransitionTime":"2026-01-31T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.556155 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.556217 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.556229 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.556252 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.556266 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:51Z","lastTransitionTime":"2026-01-31T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.602752 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.602761 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:51 crc kubenswrapper[4687]: E0131 06:44:51.602940 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.602712 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.603015 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:51 crc kubenswrapper[4687]: E0131 06:44:51.603351 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:51 crc kubenswrapper[4687]: E0131 06:44:51.603601 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:51 crc kubenswrapper[4687]: E0131 06:44:51.603641 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.615554 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 04:03:16.234661894 +0000 UTC Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.659838 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.659874 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.659885 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.659897 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.659907 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:51Z","lastTransitionTime":"2026-01-31T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.762973 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.763017 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.763030 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.763048 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.763060 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:51Z","lastTransitionTime":"2026-01-31T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.865930 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.865995 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.866010 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.866027 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.866038 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:51Z","lastTransitionTime":"2026-01-31T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.969440 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.969492 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.969505 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.969523 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:51 crc kubenswrapper[4687]: I0131 06:44:51.969535 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:51Z","lastTransitionTime":"2026-01-31T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.072797 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.072847 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.072865 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.072886 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.072901 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:52Z","lastTransitionTime":"2026-01-31T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.175803 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.175840 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.175904 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.175924 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.175935 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:52Z","lastTransitionTime":"2026-01-31T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.277897 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.277932 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.277949 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.277971 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.277986 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:52Z","lastTransitionTime":"2026-01-31T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.381128 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.381184 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.381196 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.381213 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.381225 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:52Z","lastTransitionTime":"2026-01-31T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.483801 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.483843 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.483857 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.483875 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.483890 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:52Z","lastTransitionTime":"2026-01-31T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.586991 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.587306 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.587446 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.587609 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.587714 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:52Z","lastTransitionTime":"2026-01-31T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.616545 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 02:43:53.253793535 +0000 UTC Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.690535 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.690576 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.690589 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.690606 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.690617 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:52Z","lastTransitionTime":"2026-01-31T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.750014 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.750061 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.750073 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.750088 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.750104 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T06:44:52Z","lastTransitionTime":"2026-01-31T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.801964 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptfrf" podStartSLOduration=84.801944481 podStartE2EDuration="1m24.801944481s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:44:45.91133551 +0000 UTC m=+112.188595085" watchObservedRunningTime="2026-01-31 06:44:52.801944481 +0000 UTC m=+119.079204056" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.802125 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j"] Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.802501 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.805388 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.805514 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.805607 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.805680 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.884977 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/365b9b91-de3f-4b99-9474-29ed95c2a985-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gwq4j\" (UID: \"365b9b91-de3f-4b99-9474-29ed95c2a985\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.885136 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/365b9b91-de3f-4b99-9474-29ed95c2a985-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gwq4j\" (UID: \"365b9b91-de3f-4b99-9474-29ed95c2a985\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.885247 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/365b9b91-de3f-4b99-9474-29ed95c2a985-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gwq4j\" (UID: \"365b9b91-de3f-4b99-9474-29ed95c2a985\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.885338 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/365b9b91-de3f-4b99-9474-29ed95c2a985-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gwq4j\" (UID: \"365b9b91-de3f-4b99-9474-29ed95c2a985\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.885360 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/365b9b91-de3f-4b99-9474-29ed95c2a985-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gwq4j\" (UID: \"365b9b91-de3f-4b99-9474-29ed95c2a985\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.986181 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/365b9b91-de3f-4b99-9474-29ed95c2a985-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gwq4j\" (UID: \"365b9b91-de3f-4b99-9474-29ed95c2a985\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.986248 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/365b9b91-de3f-4b99-9474-29ed95c2a985-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gwq4j\" (UID: \"365b9b91-de3f-4b99-9474-29ed95c2a985\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.986271 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/365b9b91-de3f-4b99-9474-29ed95c2a985-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gwq4j\" (UID: \"365b9b91-de3f-4b99-9474-29ed95c2a985\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.986285 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/365b9b91-de3f-4b99-9474-29ed95c2a985-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gwq4j\" (UID: \"365b9b91-de3f-4b99-9474-29ed95c2a985\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.986304 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/365b9b91-de3f-4b99-9474-29ed95c2a985-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gwq4j\" (UID: \"365b9b91-de3f-4b99-9474-29ed95c2a985\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.986376 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/365b9b91-de3f-4b99-9474-29ed95c2a985-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gwq4j\" (UID: \"365b9b91-de3f-4b99-9474-29ed95c2a985\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.986471 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/365b9b91-de3f-4b99-9474-29ed95c2a985-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gwq4j\" (UID: \"365b9b91-de3f-4b99-9474-29ed95c2a985\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.987388 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/365b9b91-de3f-4b99-9474-29ed95c2a985-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gwq4j\" (UID: \"365b9b91-de3f-4b99-9474-29ed95c2a985\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:52 crc kubenswrapper[4687]: I0131 06:44:52.992787 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/365b9b91-de3f-4b99-9474-29ed95c2a985-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gwq4j\" (UID: \"365b9b91-de3f-4b99-9474-29ed95c2a985\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:53 crc kubenswrapper[4687]: 
I0131 06:44:53.005171 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/365b9b91-de3f-4b99-9474-29ed95c2a985-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gwq4j\" (UID: \"365b9b91-de3f-4b99-9474-29ed95c2a985\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:53 crc kubenswrapper[4687]: I0131 06:44:53.119504 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" Jan 31 06:44:53 crc kubenswrapper[4687]: I0131 06:44:53.182055 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" event={"ID":"365b9b91-de3f-4b99-9474-29ed95c2a985","Type":"ContainerStarted","Data":"bd5d5aa65a077c5c323e0ab9dee8a9ba759434bcde520d17df1803a39fe152f9"} Jan 31 06:44:53 crc kubenswrapper[4687]: I0131 06:44:53.602681 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:53 crc kubenswrapper[4687]: I0131 06:44:53.602681 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:53 crc kubenswrapper[4687]: E0131 06:44:53.602833 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:53 crc kubenswrapper[4687]: E0131 06:44:53.603003 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:53 crc kubenswrapper[4687]: I0131 06:44:53.603200 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:53 crc kubenswrapper[4687]: I0131 06:44:53.603294 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:53 crc kubenswrapper[4687]: E0131 06:44:53.603572 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:53 crc kubenswrapper[4687]: E0131 06:44:53.603754 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:53 crc kubenswrapper[4687]: I0131 06:44:53.618007 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 01:59:46.276995113 +0000 UTC Jan 31 06:44:53 crc kubenswrapper[4687]: I0131 06:44:53.618984 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 31 06:44:53 crc kubenswrapper[4687]: I0131 06:44:53.629967 4687 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 06:44:54 crc kubenswrapper[4687]: I0131 06:44:54.188325 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" event={"ID":"365b9b91-de3f-4b99-9474-29ed95c2a985","Type":"ContainerStarted","Data":"317743772b7d3d6c31a1644cdede5a12efa1eaaba28db4e038a9862f83f6c954"} Jan 31 06:44:55 crc kubenswrapper[4687]: E0131 06:44:55.566006 4687 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 31 06:44:55 crc kubenswrapper[4687]: I0131 06:44:55.603088 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:55 crc kubenswrapper[4687]: I0131 06:44:55.603108 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:55 crc kubenswrapper[4687]: I0131 06:44:55.603108 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:55 crc kubenswrapper[4687]: I0131 06:44:55.603198 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:55 crc kubenswrapper[4687]: E0131 06:44:55.604560 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:55 crc kubenswrapper[4687]: E0131 06:44:55.604623 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:55 crc kubenswrapper[4687]: E0131 06:44:55.604702 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:55 crc kubenswrapper[4687]: E0131 06:44:55.604763 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:55 crc kubenswrapper[4687]: E0131 06:44:55.947160 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 06:44:57 crc kubenswrapper[4687]: I0131 06:44:57.603501 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:57 crc kubenswrapper[4687]: I0131 06:44:57.603559 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:57 crc kubenswrapper[4687]: I0131 06:44:57.603518 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:57 crc kubenswrapper[4687]: I0131 06:44:57.603630 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:57 crc kubenswrapper[4687]: E0131 06:44:57.603679 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:57 crc kubenswrapper[4687]: E0131 06:44:57.603770 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:57 crc kubenswrapper[4687]: E0131 06:44:57.603826 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:57 crc kubenswrapper[4687]: E0131 06:44:57.603962 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:44:59 crc kubenswrapper[4687]: I0131 06:44:59.602556 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:44:59 crc kubenswrapper[4687]: I0131 06:44:59.602632 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:44:59 crc kubenswrapper[4687]: I0131 06:44:59.602563 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:44:59 crc kubenswrapper[4687]: E0131 06:44:59.602710 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:44:59 crc kubenswrapper[4687]: I0131 06:44:59.602632 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:44:59 crc kubenswrapper[4687]: E0131 06:44:59.602810 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:44:59 crc kubenswrapper[4687]: E0131 06:44:59.603013 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:44:59 crc kubenswrapper[4687]: E0131 06:44:59.603160 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:45:00 crc kubenswrapper[4687]: E0131 06:45:00.949136 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 06:45:01 crc kubenswrapper[4687]: I0131 06:45:01.602868 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:45:01 crc kubenswrapper[4687]: I0131 06:45:01.602892 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:45:01 crc kubenswrapper[4687]: I0131 06:45:01.602901 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:45:01 crc kubenswrapper[4687]: I0131 06:45:01.602919 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:45:01 crc kubenswrapper[4687]: E0131 06:45:01.603568 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:45:01 crc kubenswrapper[4687]: E0131 06:45:01.603646 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:45:01 crc kubenswrapper[4687]: E0131 06:45:01.603685 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:45:01 crc kubenswrapper[4687]: E0131 06:45:01.603732 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:45:03 crc kubenswrapper[4687]: I0131 06:45:03.214942 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77mzd_96c21054-65ed-4db4-969f-bbb10f612772/kube-multus/1.log" Jan 31 06:45:03 crc kubenswrapper[4687]: I0131 06:45:03.216016 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77mzd_96c21054-65ed-4db4-969f-bbb10f612772/kube-multus/0.log" Jan 31 06:45:03 crc kubenswrapper[4687]: I0131 06:45:03.216114 4687 generic.go:334] "Generic (PLEG): container finished" podID="96c21054-65ed-4db4-969f-bbb10f612772" containerID="f976ffe4cdfe5f8bfeef0529c144f8511fd1e7562a2202b11fd602c815560562" exitCode=1 Jan 31 06:45:03 crc kubenswrapper[4687]: I0131 06:45:03.216178 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-77mzd" event={"ID":"96c21054-65ed-4db4-969f-bbb10f612772","Type":"ContainerDied","Data":"f976ffe4cdfe5f8bfeef0529c144f8511fd1e7562a2202b11fd602c815560562"} Jan 31 06:45:03 crc kubenswrapper[4687]: I0131 06:45:03.216248 4687 scope.go:117] "RemoveContainer" containerID="8b0c8c9d9fdc4a7bfb17400811804b29b5fa2d46de143d8320fea404283db9c7" Jan 31 06:45:03 crc kubenswrapper[4687]: I0131 06:45:03.216680 4687 scope.go:117] "RemoveContainer" containerID="f976ffe4cdfe5f8bfeef0529c144f8511fd1e7562a2202b11fd602c815560562" Jan 31 06:45:03 crc kubenswrapper[4687]: E0131 06:45:03.216874 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-77mzd_openshift-multus(96c21054-65ed-4db4-969f-bbb10f612772)\"" pod="openshift-multus/multus-77mzd" podUID="96c21054-65ed-4db4-969f-bbb10f612772" Jan 31 06:45:03 crc kubenswrapper[4687]: I0131 06:45:03.233615 4687 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwq4j" podStartSLOduration=96.233596223 podStartE2EDuration="1m36.233596223s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:44:54.205819223 +0000 UTC m=+120.483078868" watchObservedRunningTime="2026-01-31 06:45:03.233596223 +0000 UTC m=+129.510855798" Jan 31 06:45:03 crc kubenswrapper[4687]: I0131 06:45:03.602987 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:45:03 crc kubenswrapper[4687]: I0131 06:45:03.603073 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:45:03 crc kubenswrapper[4687]: E0131 06:45:03.603100 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:45:03 crc kubenswrapper[4687]: I0131 06:45:03.603139 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:45:03 crc kubenswrapper[4687]: I0131 06:45:03.603142 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:45:03 crc kubenswrapper[4687]: E0131 06:45:03.603233 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:45:03 crc kubenswrapper[4687]: E0131 06:45:03.603284 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:45:03 crc kubenswrapper[4687]: E0131 06:45:03.603339 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:45:04 crc kubenswrapper[4687]: I0131 06:45:04.220010 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77mzd_96c21054-65ed-4db4-969f-bbb10f612772/kube-multus/1.log" Jan 31 06:45:05 crc kubenswrapper[4687]: I0131 06:45:05.602809 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:45:05 crc kubenswrapper[4687]: I0131 06:45:05.602933 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:45:05 crc kubenswrapper[4687]: E0131 06:45:05.602988 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:45:05 crc kubenswrapper[4687]: I0131 06:45:05.603056 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:45:05 crc kubenswrapper[4687]: I0131 06:45:05.606140 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:45:05 crc kubenswrapper[4687]: E0131 06:45:05.606127 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:45:05 crc kubenswrapper[4687]: E0131 06:45:05.606294 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:45:05 crc kubenswrapper[4687]: E0131 06:45:05.606482 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:45:05 crc kubenswrapper[4687]: I0131 06:45:05.607791 4687 scope.go:117] "RemoveContainer" containerID="900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d" Jan 31 06:45:05 crc kubenswrapper[4687]: E0131 06:45:05.949638 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 31 06:45:06 crc kubenswrapper[4687]: I0131 06:45:06.228180 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovnkube-controller/3.log" Jan 31 06:45:06 crc kubenswrapper[4687]: I0131 06:45:06.232381 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerStarted","Data":"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5"} Jan 31 06:45:06 crc kubenswrapper[4687]: I0131 06:45:06.232916 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:45:06 crc kubenswrapper[4687]: I0131 06:45:06.264616 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podStartSLOduration=99.264598551 podStartE2EDuration="1m39.264598551s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:06.262872732 +0000 UTC m=+132.540132307" watchObservedRunningTime="2026-01-31 06:45:06.264598551 +0000 UTC m=+132.541858126" Jan 31 06:45:06 crc kubenswrapper[4687]: I0131 06:45:06.590098 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hbxj7"] Jan 31 06:45:06 crc kubenswrapper[4687]: I0131 06:45:06.590557 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:45:06 crc kubenswrapper[4687]: E0131 06:45:06.590834 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:45:07 crc kubenswrapper[4687]: I0131 06:45:07.603289 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:45:07 crc kubenswrapper[4687]: I0131 06:45:07.603338 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:45:07 crc kubenswrapper[4687]: E0131 06:45:07.603473 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:45:07 crc kubenswrapper[4687]: E0131 06:45:07.603566 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:45:07 crc kubenswrapper[4687]: I0131 06:45:07.604312 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:45:07 crc kubenswrapper[4687]: E0131 06:45:07.604630 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:45:08 crc kubenswrapper[4687]: I0131 06:45:08.603367 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:45:08 crc kubenswrapper[4687]: E0131 06:45:08.603553 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:45:09 crc kubenswrapper[4687]: I0131 06:45:09.603312 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:45:09 crc kubenswrapper[4687]: I0131 06:45:09.603355 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:45:09 crc kubenswrapper[4687]: I0131 06:45:09.603312 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:45:09 crc kubenswrapper[4687]: E0131 06:45:09.603470 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:45:09 crc kubenswrapper[4687]: E0131 06:45:09.603594 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:45:09 crc kubenswrapper[4687]: E0131 06:45:09.603648 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:45:10 crc kubenswrapper[4687]: I0131 06:45:10.602819 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:45:10 crc kubenswrapper[4687]: E0131 06:45:10.603243 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:45:10 crc kubenswrapper[4687]: E0131 06:45:10.951055 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 06:45:11 crc kubenswrapper[4687]: I0131 06:45:11.602904 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:45:11 crc kubenswrapper[4687]: E0131 06:45:11.603066 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:45:11 crc kubenswrapper[4687]: I0131 06:45:11.603279 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:45:11 crc kubenswrapper[4687]: E0131 06:45:11.603325 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:45:11 crc kubenswrapper[4687]: I0131 06:45:11.603435 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:45:11 crc kubenswrapper[4687]: E0131 06:45:11.603641 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:45:12 crc kubenswrapper[4687]: I0131 06:45:12.602490 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:45:12 crc kubenswrapper[4687]: E0131 06:45:12.602679 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:45:13 crc kubenswrapper[4687]: I0131 06:45:13.603515 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:45:13 crc kubenswrapper[4687]: I0131 06:45:13.603515 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:45:13 crc kubenswrapper[4687]: I0131 06:45:13.603606 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:45:13 crc kubenswrapper[4687]: E0131 06:45:13.603965 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:45:13 crc kubenswrapper[4687]: E0131 06:45:13.604084 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:45:13 crc kubenswrapper[4687]: E0131 06:45:13.604199 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:45:14 crc kubenswrapper[4687]: I0131 06:45:14.603097 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:45:14 crc kubenswrapper[4687]: E0131 06:45:14.603489 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:45:14 crc kubenswrapper[4687]: I0131 06:45:14.603776 4687 scope.go:117] "RemoveContainer" containerID="f976ffe4cdfe5f8bfeef0529c144f8511fd1e7562a2202b11fd602c815560562" Jan 31 06:45:15 crc kubenswrapper[4687]: I0131 06:45:15.266188 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77mzd_96c21054-65ed-4db4-969f-bbb10f612772/kube-multus/1.log" Jan 31 06:45:15 crc kubenswrapper[4687]: I0131 06:45:15.266534 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-77mzd" event={"ID":"96c21054-65ed-4db4-969f-bbb10f612772","Type":"ContainerStarted","Data":"e31d388087fd196fdceaf3057d03a85e5ee6d2d5b7b4e69fde93604b3a82d632"} Jan 31 06:45:15 crc kubenswrapper[4687]: I0131 06:45:15.603273 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:45:15 crc kubenswrapper[4687]: I0131 06:45:15.603359 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:45:15 crc kubenswrapper[4687]: E0131 06:45:15.604493 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:45:15 crc kubenswrapper[4687]: I0131 06:45:15.604506 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:45:15 crc kubenswrapper[4687]: E0131 06:45:15.604624 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:45:15 crc kubenswrapper[4687]: E0131 06:45:15.604693 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:45:15 crc kubenswrapper[4687]: E0131 06:45:15.951636 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 06:45:16 crc kubenswrapper[4687]: I0131 06:45:16.603519 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:45:16 crc kubenswrapper[4687]: E0131 06:45:16.603693 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:45:17 crc kubenswrapper[4687]: I0131 06:45:17.603630 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:45:17 crc kubenswrapper[4687]: I0131 06:45:17.603652 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:45:17 crc kubenswrapper[4687]: E0131 06:45:17.603830 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:45:17 crc kubenswrapper[4687]: E0131 06:45:17.603969 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:45:17 crc kubenswrapper[4687]: I0131 06:45:17.603658 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:45:17 crc kubenswrapper[4687]: E0131 06:45:17.604078 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:45:18 crc kubenswrapper[4687]: I0131 06:45:18.602592 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:45:18 crc kubenswrapper[4687]: E0131 06:45:18.602717 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:45:19 crc kubenswrapper[4687]: I0131 06:45:19.603053 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:45:19 crc kubenswrapper[4687]: I0131 06:45:19.603129 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:45:19 crc kubenswrapper[4687]: E0131 06:45:19.603206 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 06:45:19 crc kubenswrapper[4687]: E0131 06:45:19.603288 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 06:45:19 crc kubenswrapper[4687]: I0131 06:45:19.603472 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:45:19 crc kubenswrapper[4687]: E0131 06:45:19.603696 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 06:45:20 crc kubenswrapper[4687]: I0131 06:45:20.602713 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:45:20 crc kubenswrapper[4687]: E0131 06:45:20.602844 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbxj7" podUID="dead0f10-2469-49a4-8d26-93fc90d6451d" Jan 31 06:45:21 crc kubenswrapper[4687]: I0131 06:45:21.603643 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:45:21 crc kubenswrapper[4687]: I0131 06:45:21.603707 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:45:21 crc kubenswrapper[4687]: I0131 06:45:21.603798 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:45:21 crc kubenswrapper[4687]: I0131 06:45:21.607905 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 06:45:21 crc kubenswrapper[4687]: I0131 06:45:21.614331 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 06:45:21 crc kubenswrapper[4687]: I0131 06:45:21.614477 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 06:45:21 crc kubenswrapper[4687]: I0131 06:45:21.614499 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 06:45:22 crc kubenswrapper[4687]: I0131 06:45:22.603169 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:45:22 crc kubenswrapper[4687]: I0131 06:45:22.605817 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 06:45:22 crc kubenswrapper[4687]: I0131 06:45:22.606490 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 06:45:22 crc kubenswrapper[4687]: I0131 06:45:22.971316 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.022689 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bxz2x"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.023984 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.024382 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.025338 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.026117 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.026489 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.027745 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8qhsc"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.028466 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.030167 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.030944 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.036031 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.036030 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.036172 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.036203 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.036550 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.036691 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 06:45:23 
crc kubenswrapper[4687]: I0131 06:45:23.037202 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.037392 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.039249 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.039833 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.044108 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.044111 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.044268 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.044290 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.044355 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.044465 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.045564 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-kv6zt"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.046022 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.047099 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.047528 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.047601 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.047975 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.048082 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.048093 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.048101 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.048124 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.048167 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.048246 4687 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.050687 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6qn9w"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.051571 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.058323 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.058345 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.058623 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.059486 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.059610 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.059733 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.060009 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.060042 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.060049 4687 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.061769 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.062262 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.062608 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.064507 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-vxbfn"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.065476 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-vxbfn" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.070400 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zfg87"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.071503 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.076116 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.077190 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.077208 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.077306 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.077495 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.077530 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.077610 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.077619 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.077678 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.077724 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 06:45:23 crc 
kubenswrapper[4687]: I0131 06:45:23.077865 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.078117 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.078122 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.078234 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.078334 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.077870 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.078686 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.078893 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.078913 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.079033 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bb2t2"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.079512 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.080804 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.082987 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s56tv"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.083749 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s56tv" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.084703 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.085063 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.085352 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.089605 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-crdmb"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.090111 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.093550 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.093570 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.094074 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.095265 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jrsbk"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.095766 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.096066 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.096594 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.096798 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jrsbk" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.097447 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.097788 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.098362 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.098711 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.099703 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.099840 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.100019 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.100138 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.100243 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.101484 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh"] Jan 31 06:45:23 crc 
kubenswrapper[4687]: I0131 06:45:23.101932 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kv4b4"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.102167 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.102196 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.102327 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.102463 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.102491 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.103739 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.103899 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.103949 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.104097 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.105814 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.105996 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.106154 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.106860 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.107264 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.107366 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.107463 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.107772 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.107792 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.107909 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.119326 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.122106 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.122392 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.122435 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.123013 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.126859 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-zm4ws"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.127018 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.127830 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.128064 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.129518 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-47m2d"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.129622 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.144253 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-47m2d" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.146184 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.146860 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.147067 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.148449 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bq5j8"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.149749 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.150073 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.150835 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bq5j8" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.151187 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-k7lmb"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.151861 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.152164 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.159194 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.159387 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.159536 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.159652 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.159827 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.160159 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.162308 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.162452 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.163043 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.163155 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.163474 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.165796 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.166742 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kg8s2"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.166849 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.168201 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.168863 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kg8s2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.171248 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.171688 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.173009 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.175070 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.180267 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.181633 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.182495 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.184237 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.185789 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c27wp"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.186637 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.187142 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.188111 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bgn6j"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.188933 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bgn6j" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.190474 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vsgwh"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.190975 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-p4g92"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.191317 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.191887 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.191913 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vsgwh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.192203 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.191954 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4g92" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.192670 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.195273 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bxz2x"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.201740 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.202521 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8qhsc"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.204138 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bb2t2"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.207034 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6qn9w"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.208276 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.209002 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.209983 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fed9a01f-700b-493d-bb38-7a730dddccb3-config-volume\") pod \"collect-profiles-29497365-4d98l\" (UID: \"fed9a01f-700b-493d-bb38-7a730dddccb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210032 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86e55636-cd31-4ec5-9e24-2c281d474481-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wfgl2\" (UID: \"86e55636-cd31-4ec5-9e24-2c281d474481\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210069 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8qhsc\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210110 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-oauth-serving-cert\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210142 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/25dca60d-d3da-4a23-b32a-cf4654f6298d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m6962\" (UID: \"25dca60d-d3da-4a23-b32a-cf4654f6298d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210168 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7ebcd1d8-3a13-4a8c-859b-b1d8351883ef-profile-collector-cert\") pod \"catalog-operator-68c6474976-pzw56\" (UID: \"7ebcd1d8-3a13-4a8c-859b-b1d8351883ef\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210197 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b-auth-proxy-config\") pod \"machine-approver-56656f9798-zbgss\" (UID: \"f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210231 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf5rs\" (UniqueName: \"kubernetes.io/projected/ea6afbaa-a516-45e0-bbd8-199b879e2654-kube-api-access-vf5rs\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210261 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce8ee922-54be-446b-ab92-e5459763496c-metrics-tls\") pod \"dns-operator-744455d44c-47m2d\" (UID: \"ce8ee922-54be-446b-ab92-e5459763496c\") " pod="openshift-dns-operator/dns-operator-744455d44c-47m2d" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210307 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt2b9\" (UniqueName: \"kubernetes.io/projected/f32026ee-35c2-42bc-aa53-14e8ccc5e136-kube-api-access-jt2b9\") pod \"openshift-config-operator-7777fb866f-zfg87\" (UID: \"f32026ee-35c2-42bc-aa53-14e8ccc5e136\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210339 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ea0d9432-9215-4303-8914-0b0d4c7e49a8-default-certificate\") pod \"router-default-5444994796-k7lmb\" (UID: \"ea0d9432-9215-4303-8914-0b0d4c7e49a8\") " pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210391 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/787fd9f1-90b0-454b-a9cf-2016db70043d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4qxbh\" (UID: \"787fd9f1-90b0-454b-a9cf-2016db70043d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210466 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f32026ee-35c2-42bc-aa53-14e8ccc5e136-serving-cert\") pod \"openshift-config-operator-7777fb866f-zfg87\" (UID: \"f32026ee-35c2-42bc-aa53-14e8ccc5e136\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210508 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f3171d3-7275-477b-8c99-cae75ecd914c-config\") pod \"machine-api-operator-5694c8668f-kv6zt\" (UID: \"8f3171d3-7275-477b-8c99-cae75ecd914c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210544 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba4ee6bf-8298-425c-8603-0816ef6d62a2-serving-cert\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " 
pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210598 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210651 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b987ab3e-a46b-4852-881e-cd84a2f42e26-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zkmnb\" (UID: \"b987ab3e-a46b-4852-881e-cd84a2f42e26\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.210936 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c641c91-1772-452d-b8e5-e2e917fe0f3e-serving-cert\") pod \"etcd-operator-b45778765-kv4b4\" (UID: \"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211042 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ba4ee6bf-8298-425c-8603-0816ef6d62a2-audit\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211095 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba4ee6bf-8298-425c-8603-0816ef6d62a2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211142 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc4167b-60df-4666-b1b5-dc5ea87b7f6e-service-ca-bundle\") pod \"authentication-operator-69f744f599-bb2t2\" (UID: \"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211187 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/25dca60d-d3da-4a23-b32a-cf4654f6298d-srv-cert\") pod \"olm-operator-6b444d44fb-m6962\" (UID: \"25dca60d-d3da-4a23-b32a-cf4654f6298d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211215 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211239 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-console-oauth-config\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " 
pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211264 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-client-ca\") pod \"controller-manager-879f6c89f-8qhsc\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211289 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/abed3680-932f-4c8b-8ff2-3b011b996088-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kg8s2\" (UID: \"abed3680-932f-4c8b-8ff2-3b011b996088\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kg8s2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211311 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/702885bb-6915-436f-b925-b4c1e88e5edf-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2cqlg\" (UID: \"702885bb-6915-436f-b925-b4c1e88e5edf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211331 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c58d0a19-a26d-4bb4-a46a-4bffe9491a99-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hsxqb\" (UID: \"c58d0a19-a26d-4bb4-a46a-4bffe9491a99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211368 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-qx625\" (UniqueName: \"kubernetes.io/projected/e2e4841e-e880-45f4-8769-cd9fea35654e-kube-api-access-qx625\") pod \"controller-manager-879f6c89f-8qhsc\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211426 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04efc7d0-c0f8-44ee-ac0e-5289f770f39e-apiservice-cert\") pod \"packageserver-d55dfcdfc-q6qrp\" (UID: \"04efc7d0-c0f8-44ee-ac0e-5289f770f39e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211467 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/175a043a-d6f7-4c39-953b-560986f36646-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c27wp\" (UID: \"175a043a-d6f7-4c39-953b-560986f36646\") " pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211503 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/04efc7d0-c0f8-44ee-ac0e-5289f770f39e-tmpfs\") pod \"packageserver-d55dfcdfc-q6qrp\" (UID: \"04efc7d0-c0f8-44ee-ac0e-5289f770f39e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211537 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cecb279e-a3b6-4860-9afe-62cf3eeb2e9c-config\") pod \"kube-apiserver-operator-766d6c64bb-gbcr5\" (UID: \"cecb279e-a3b6-4860-9afe-62cf3eeb2e9c\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211573 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f32026ee-35c2-42bc-aa53-14e8ccc5e136-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zfg87\" (UID: \"f32026ee-35c2-42bc-aa53-14e8ccc5e136\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211593 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211621 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-console-config\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211642 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvpzp\" (UniqueName: \"kubernetes.io/projected/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-kube-api-access-fvpzp\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211678 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211711 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8766ce0a-e289-4861-9b0a-9b9ad7c0e623-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2nqsr\" (UID: \"8766ce0a-e289-4861-9b0a-9b9ad7c0e623\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211743 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k867c\" (UniqueName: \"kubernetes.io/projected/7c641c91-1772-452d-b8e5-e2e917fe0f3e-kube-api-access-k867c\") pod \"etcd-operator-b45778765-kv4b4\" (UID: \"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211780 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ba4ee6bf-8298-425c-8603-0816ef6d62a2-image-import-ca\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211815 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc5kn\" (UniqueName: \"kubernetes.io/projected/5695d1ed-642b-4546-9624-306b27441931-kube-api-access-tc5kn\") pod \"ingress-operator-5b745b69d9-xxvkh\" (UID: \"5695d1ed-642b-4546-9624-306b27441931\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211835 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ff64219-76d2-4a04-9932-59f5c1619358-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s56tv\" (UID: \"5ff64219-76d2-4a04-9932-59f5c1619358\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s56tv" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211874 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-vxbfn"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.211940 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc4167b-60df-4666-b1b5-dc5ea87b7f6e-config\") pod \"authentication-operator-69f744f599-bb2t2\" (UID: \"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.212026 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c641c91-1772-452d-b8e5-e2e917fe0f3e-config\") pod \"etcd-operator-b45778765-kv4b4\" (UID: \"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.212087 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ea0d9432-9215-4303-8914-0b0d4c7e49a8-stats-auth\") pod \"router-default-5444994796-k7lmb\" (UID: \"ea0d9432-9215-4303-8914-0b0d4c7e49a8\") " pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:23 crc 
kubenswrapper[4687]: I0131 06:45:23.212116 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8mhg\" (UniqueName: \"kubernetes.io/projected/7ebcd1d8-3a13-4a8c-859b-b1d8351883ef-kube-api-access-b8mhg\") pod \"catalog-operator-68c6474976-pzw56\" (UID: \"7ebcd1d8-3a13-4a8c-859b-b1d8351883ef\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.212299 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5695d1ed-642b-4546-9624-306b27441931-metrics-tls\") pod \"ingress-operator-5b745b69d9-xxvkh\" (UID: \"5695d1ed-642b-4546-9624-306b27441931\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.212472 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcc4167b-60df-4666-b1b5-dc5ea87b7f6e-serving-cert\") pod \"authentication-operator-69f744f599-bb2t2\" (UID: \"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.212576 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.212634 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/7ebcd1d8-3a13-4a8c-859b-b1d8351883ef-srv-cert\") pod \"catalog-operator-68c6474976-pzw56\" (UID: \"7ebcd1d8-3a13-4a8c-859b-b1d8351883ef\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.212743 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b-machine-approver-tls\") pod \"machine-approver-56656f9798-zbgss\" (UID: \"f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.212786 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.212813 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2b80006-d9e1-40e5-becc-5764e747f572-audit-dir\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.212856 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9tsl\" (UniqueName: \"kubernetes.io/projected/e5b7bf80-e0c2-461f-944b-43b00db98f09-kube-api-access-s9tsl\") pod \"downloads-7954f5f757-vxbfn\" (UID: \"e5b7bf80-e0c2-461f-944b-43b00db98f09\") " pod="openshift-console/downloads-7954f5f757-vxbfn" Jan 31 
06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.212881 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18165c42-63ba-4c65-8ba7-f0e205fc74b7-config\") pod \"console-operator-58897d9998-jrsbk\" (UID: \"18165c42-63ba-4c65-8ba7-f0e205fc74b7\") " pod="openshift-console-operator/console-operator-58897d9998-jrsbk"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.212909 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7c641c91-1772-452d-b8e5-e2e917fe0f3e-etcd-client\") pod \"etcd-operator-b45778765-kv4b4\" (UID: \"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.212963 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc4167b-60df-4666-b1b5-dc5ea87b7f6e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bb2t2\" (UID: \"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.212999 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cecb279e-a3b6-4860-9afe-62cf3eeb2e9c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gbcr5\" (UID: \"cecb279e-a3b6-4860-9afe-62cf3eeb2e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.213036 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c58d0a19-a26d-4bb4-a46a-4bffe9491a99-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hsxqb\" (UID: \"c58d0a19-a26d-4bb4-a46a-4bffe9491a99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.213435 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvkt2\" (UniqueName: \"kubernetes.io/projected/ce8ee922-54be-446b-ab92-e5459763496c-kube-api-access-hvkt2\") pod \"dns-operator-744455d44c-47m2d\" (UID: \"ce8ee922-54be-446b-ab92-e5459763496c\") " pod="openshift-dns-operator/dns-operator-744455d44c-47m2d"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.213575 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbkpz\" (UniqueName: \"kubernetes.io/projected/04efc7d0-c0f8-44ee-ac0e-5289f770f39e-kube-api-access-zbkpz\") pod \"packageserver-d55dfcdfc-q6qrp\" (UID: \"04efc7d0-c0f8-44ee-ac0e-5289f770f39e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.213642 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.213761 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d2b80006-d9e1-40e5-becc-5764e747f572-etcd-client\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.213789 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vsgwh\" (UID: \"ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vsgwh"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.213843 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x69r9\" (UniqueName: \"kubernetes.io/projected/ba4ee6bf-8298-425c-8603-0816ef6d62a2-kube-api-access-x69r9\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.213878 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2b80006-d9e1-40e5-becc-5764e747f572-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.213912 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8f3171d3-7275-477b-8c99-cae75ecd914c-images\") pod \"machine-api-operator-5694c8668f-kv6zt\" (UID: \"8f3171d3-7275-477b-8c99-cae75ecd914c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.213992 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5695d1ed-642b-4546-9624-306b27441931-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xxvkh\" (UID: \"5695d1ed-642b-4546-9624-306b27441931\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214118 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cecb279e-a3b6-4860-9afe-62cf3eeb2e9c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gbcr5\" (UID: \"cecb279e-a3b6-4860-9afe-62cf3eeb2e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214229 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04efc7d0-c0f8-44ee-ac0e-5289f770f39e-webhook-cert\") pod \"packageserver-d55dfcdfc-q6qrp\" (UID: \"04efc7d0-c0f8-44ee-ac0e-5289f770f39e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214336 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f3a2c39-d679-4b61-affc-eeb451304860-config\") pod \"service-ca-operator-777779d784-p4g92\" (UID: \"4f3a2c39-d679-4b61-affc-eeb451304860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4g92"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214396 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/702885bb-6915-436f-b925-b4c1e88e5edf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2cqlg\" (UID: \"702885bb-6915-436f-b925-b4c1e88e5edf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214450 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnc5k\" (UniqueName: \"kubernetes.io/projected/d2b80006-d9e1-40e5-becc-5764e747f572-kube-api-access-hnc5k\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214470 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw88b\" (UniqueName: \"kubernetes.io/projected/4f3a2c39-d679-4b61-affc-eeb451304860-kube-api-access-xw88b\") pod \"service-ca-operator-777779d784-p4g92\" (UID: \"4f3a2c39-d679-4b61-affc-eeb451304860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4g92"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214490 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86e55636-cd31-4ec5-9e24-2c281d474481-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wfgl2\" (UID: \"86e55636-cd31-4ec5-9e24-2c281d474481\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214507 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqxnh\" (UniqueName: \"kubernetes.io/projected/ea0d9432-9215-4303-8914-0b0d4c7e49a8-kube-api-access-jqxnh\") pod \"router-default-5444994796-k7lmb\" (UID: \"ea0d9432-9215-4303-8914-0b0d4c7e49a8\") " pod="openshift-ingress/router-default-5444994796-k7lmb"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214525 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2b80006-d9e1-40e5-becc-5764e747f572-audit-policies\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214553 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c641c91-1772-452d-b8e5-e2e917fe0f3e-etcd-service-ca\") pod \"etcd-operator-b45778765-kv4b4\" (UID: \"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214582 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214600 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba4ee6bf-8298-425c-8603-0816ef6d62a2-config\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214618 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65572f7a-260e-4d12-b9ad-e17f1b17eab4-config\") pod \"route-controller-manager-6576b87f9c-fc67z\" (UID: \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214644 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d2b80006-d9e1-40e5-becc-5764e747f572-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214669 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx4jt\" (UniqueName: \"kubernetes.io/projected/702885bb-6915-436f-b925-b4c1e88e5edf-kube-api-access-vx4jt\") pod \"cluster-image-registry-operator-dc59b4c8b-2cqlg\" (UID: \"702885bb-6915-436f-b925-b4c1e88e5edf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214687 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r9nk\" (UniqueName: \"kubernetes.io/projected/fcc4167b-60df-4666-b1b5-dc5ea87b7f6e-kube-api-access-8r9nk\") pod \"authentication-operator-69f744f599-bb2t2\" (UID: \"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.214703 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ba4ee6bf-8298-425c-8603-0816ef6d62a2-node-pullsecrets\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.215129 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2mqr\" (UniqueName: \"kubernetes.io/projected/abed3680-932f-4c8b-8ff2-3b011b996088-kube-api-access-v2mqr\") pod \"multus-admission-controller-857f4d67dd-kg8s2\" (UID: \"abed3680-932f-4c8b-8ff2-3b011b996088\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kg8s2"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.215188 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf92b96c-c1bc-4102-a49b-003d08ef9de7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qjjpt\" (UID: \"cf92b96c-c1bc-4102-a49b-003d08ef9de7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.215214 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d2b80006-d9e1-40e5-becc-5764e747f572-encryption-config\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.215267 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8qck\" (UniqueName: \"kubernetes.io/projected/787fd9f1-90b0-454b-a9cf-2016db70043d-kube-api-access-t8qck\") pod \"machine-config-controller-84d6567774-4qxbh\" (UID: \"787fd9f1-90b0-454b-a9cf-2016db70043d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.215358 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18165c42-63ba-4c65-8ba7-f0e205fc74b7-serving-cert\") pod \"console-operator-58897d9998-jrsbk\" (UID: \"18165c42-63ba-4c65-8ba7-f0e205fc74b7\") " pod="openshift-console-operator/console-operator-58897d9998-jrsbk"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.215471 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fed9a01f-700b-493d-bb38-7a730dddccb3-secret-volume\") pod \"collect-profiles-29497365-4d98l\" (UID: \"fed9a01f-700b-493d-bb38-7a730dddccb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.215653 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b987ab3e-a46b-4852-881e-cd84a2f42e26-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zkmnb\" (UID: \"b987ab3e-a46b-4852-881e-cd84a2f42e26\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.215686 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7c641c91-1772-452d-b8e5-e2e917fe0f3e-etcd-ca\") pod \"etcd-operator-b45778765-kv4b4\" (UID: \"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.215710 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-trusted-ca-bundle\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.215835 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd2sz\" (UniqueName: \"kubernetes.io/projected/5ff64219-76d2-4a04-9932-59f5c1619358-kube-api-access-dd2sz\") pod \"cluster-samples-operator-665b6dd947-s56tv\" (UID: \"5ff64219-76d2-4a04-9932-59f5c1619358\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s56tv"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.215923 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea0d9432-9215-4303-8914-0b0d4c7e49a8-metrics-certs\") pod \"router-default-5444994796-k7lmb\" (UID: \"ea0d9432-9215-4303-8914-0b0d4c7e49a8\") " pod="openshift-ingress/router-default-5444994796-k7lmb"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.216120 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/787fd9f1-90b0-454b-a9cf-2016db70043d-proxy-tls\") pod \"machine-config-controller-84d6567774-4qxbh\" (UID: \"787fd9f1-90b0-454b-a9cf-2016db70043d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.216181 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kv4b4"]
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.216214 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ba4ee6bf-8298-425c-8603-0816ef6d62a2-etcd-serving-ca\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.216247 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba4ee6bf-8298-425c-8603-0816ef6d62a2-audit-dir\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.216432 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.216486 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e4841e-e880-45f4-8769-cd9fea35654e-serving-cert\") pod \"controller-manager-879f6c89f-8qhsc\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.216509 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65572f7a-260e-4d12-b9ad-e17f1b17eab4-client-ca\") pod \"route-controller-manager-6576b87f9c-fc67z\" (UID: \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.216642 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ctqc\" (UniqueName: \"kubernetes.io/projected/65572f7a-260e-4d12-b9ad-e17f1b17eab4-kube-api-access-7ctqc\") pod \"route-controller-manager-6576b87f9c-fc67z\" (UID: \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.216673 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2hsr\" (UniqueName: \"kubernetes.io/projected/fed9a01f-700b-493d-bb38-7a730dddccb3-kube-api-access-h2hsr\") pod \"collect-profiles-29497365-4d98l\" (UID: \"fed9a01f-700b-493d-bb38-7a730dddccb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.216726 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvbgd\" (UniqueName: \"kubernetes.io/projected/b987ab3e-a46b-4852-881e-cd84a2f42e26-kube-api-access-qvbgd\") pod \"kube-storage-version-migrator-operator-b67b599dd-zkmnb\" (UID: \"b987ab3e-a46b-4852-881e-cd84a2f42e26\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.216755 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkkv6\" (UniqueName: \"kubernetes.io/projected/8f3171d3-7275-477b-8c99-cae75ecd914c-kube-api-access-jkkv6\") pod \"machine-api-operator-5694c8668f-kv6zt\" (UID: \"8f3171d3-7275-477b-8c99-cae75ecd914c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.216814 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d56dx\" (UniqueName: \"kubernetes.io/projected/18165c42-63ba-4c65-8ba7-f0e205fc74b7-kube-api-access-d56dx\") pod \"console-operator-58897d9998-jrsbk\" (UID: \"18165c42-63ba-4c65-8ba7-f0e205fc74b7\") " pod="openshift-console-operator/console-operator-58897d9998-jrsbk"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.216865 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69csf\" (UniqueName: \"kubernetes.io/projected/f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b-kube-api-access-69csf\") pod \"machine-approver-56656f9798-zbgss\" (UID: \"f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.216915 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/dadcab3b-fc57-4a2f-b680-09fc1d6b1dff-signing-key\") pod \"service-ca-9c57cc56f-bgn6j\" (UID: \"dadcab3b-fc57-4a2f-b680-09fc1d6b1dff\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgn6j"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.216946 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wqtq\" (UniqueName: \"kubernetes.io/projected/175a043a-d6f7-4c39-953b-560986f36646-kube-api-access-5wqtq\") pod \"marketplace-operator-79b997595-c27wp\" (UID: \"175a043a-d6f7-4c39-953b-560986f36646\") " pod="openshift-marketplace/marketplace-operator-79b997595-c27wp"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.216981 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf92b96c-c1bc-4102-a49b-003d08ef9de7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qjjpt\" (UID: \"cf92b96c-c1bc-4102-a49b-003d08ef9de7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.217009 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-console-serving-cert\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.217027 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea0d9432-9215-4303-8914-0b0d4c7e49a8-service-ca-bundle\") pod \"router-default-5444994796-k7lmb\" (UID: \"ea0d9432-9215-4303-8914-0b0d4c7e49a8\") " pod="openshift-ingress/router-default-5444994796-k7lmb"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.217051 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxz88\" (UniqueName: \"kubernetes.io/projected/dadcab3b-fc57-4a2f-b680-09fc1d6b1dff-kube-api-access-kxz88\") pod \"service-ca-9c57cc56f-bgn6j\" (UID: \"dadcab3b-fc57-4a2f-b680-09fc1d6b1dff\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgn6j"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.217113 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/175a043a-d6f7-4c39-953b-560986f36646-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c27wp\" (UID: \"175a043a-d6f7-4c39-953b-560986f36646\") " pod="openshift-marketplace/marketplace-operator-79b997595-c27wp"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.217143 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ba4ee6bf-8298-425c-8603-0816ef6d62a2-encryption-config\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.217985 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218539 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f3a2c39-d679-4b61-affc-eeb451304860-serving-cert\") pod \"service-ca-operator-777779d784-p4g92\" (UID: \"4f3a2c39-d679-4b61-affc-eeb451304860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4g92"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218576 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86e55636-cd31-4ec5-9e24-2c281d474481-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wfgl2\" (UID: \"86e55636-cd31-4ec5-9e24-2c281d474481\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218604 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/702885bb-6915-436f-b925-b4c1e88e5edf-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2cqlg\" (UID: \"702885bb-6915-436f-b925-b4c1e88e5edf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218631 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218663 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pskw5\" (UniqueName: \"kubernetes.io/projected/cf92b96c-c1bc-4102-a49b-003d08ef9de7-kube-api-access-pskw5\") pod \"openshift-apiserver-operator-796bbdcf4f-qjjpt\" (UID: \"cf92b96c-c1bc-4102-a49b-003d08ef9de7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218700 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f3171d3-7275-477b-8c99-cae75ecd914c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kv6zt\" (UID: \"8f3171d3-7275-477b-8c99-cae75ecd914c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218762 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b-config\") pod \"machine-approver-56656f9798-zbgss\" (UID: \"f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218785 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfxmg\" (UniqueName: \"kubernetes.io/projected/8766ce0a-e289-4861-9b0a-9b9ad7c0e623-kube-api-access-gfxmg\") pod \"openshift-controller-manager-operator-756b6f6bc6-2nqsr\" (UID: \"8766ce0a-e289-4861-9b0a-9b9ad7c0e623\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218801 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5695d1ed-642b-4546-9624-306b27441931-trusted-ca\") pod \"ingress-operator-5b745b69d9-xxvkh\" (UID: \"5695d1ed-642b-4546-9624-306b27441931\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218821 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-config\") pod \"controller-manager-879f6c89f-8qhsc\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218835 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-audit-policies\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218853 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-service-ca\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218868 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea6afbaa-a516-45e0-bbd8-199b879e2654-audit-dir\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218906 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18165c42-63ba-4c65-8ba7-f0e205fc74b7-trusted-ca\") pod \"console-operator-58897d9998-jrsbk\" (UID: \"18165c42-63ba-4c65-8ba7-f0e205fc74b7\") " pod="openshift-console-operator/console-operator-58897d9998-jrsbk"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218931 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65572f7a-260e-4d12-b9ad-e17f1b17eab4-serving-cert\") pod \"route-controller-manager-6576b87f9c-fc67z\" (UID: \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218945 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58d0a19-a26d-4bb4-a46a-4bffe9491a99-config\") pod \"kube-controller-manager-operator-78b949d7b-hsxqb\" (UID: \"c58d0a19-a26d-4bb4-a46a-4bffe9491a99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.218960 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9wgx\" (UniqueName: \"kubernetes.io/projected/ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef-kube-api-access-n9wgx\") pod \"control-plane-machine-set-operator-78cbb6b69f-vsgwh\" (UID: \"ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vsgwh"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.219016 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ba4ee6bf-8298-425c-8603-0816ef6d62a2-etcd-client\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.219053 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8766ce0a-e289-4861-9b0a-9b9ad7c0e623-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2nqsr\" (UID: \"8766ce0a-e289-4861-9b0a-9b9ad7c0e623\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.219070 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/dadcab3b-fc57-4a2f-b680-09fc1d6b1dff-signing-cabundle\") pod \"service-ca-9c57cc56f-bgn6j\" (UID: \"dadcab3b-fc57-4a2f-b680-09fc1d6b1dff\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgn6j"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.219083 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b80006-d9e1-40e5-becc-5764e747f572-serving-cert\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.219098 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7rw2\" (UniqueName: \"kubernetes.io/projected/25dca60d-d3da-4a23-b32a-cf4654f6298d-kube-api-access-b7rw2\") pod \"olm-operator-6b444d44fb-m6962\" (UID: \"25dca60d-d3da-4a23-b32a-cf4654f6298d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.219778 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2"]
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.221399 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr"]
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.222504 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s56tv"]
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.223861 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5"]
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.225032 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-52dkb"]
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.225759 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-52dkb"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.226496 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-crdmb"]
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.227551 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6k67p"]
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.228646 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6k67p"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.228988 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zfg87"]
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.230038 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.230560 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw"]
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.233025 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb"]
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.233082 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jrsbk"]
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.234954 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg"]
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.237169 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bq5j8"]
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.251352 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt"]
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.251402 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962"]
Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.263398 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.268296 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6k67p"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.268343 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kv6zt"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.268352 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-47m2d"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.269186 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.280550 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.280612 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.282465 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.283697 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c27wp"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.285623 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.289725 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 06:45:23 crc 
kubenswrapper[4687]: I0131 06:45:23.293614 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zm4ws"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.295901 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bgn6j"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.296854 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.298259 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-p4g92"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.299181 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kg8s2"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.300313 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-52dkb"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.301359 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.302554 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vsgwh"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.303829 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.306594 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jp9kx"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.306675 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.307792 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jp9kx" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.308197 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wz29h"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.308700 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wz29h" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.310203 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jp9kx"] Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.319684 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d2b80006-d9e1-40e5-becc-5764e747f572-etcd-client\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.319874 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.320657 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x69r9\" (UniqueName: \"kubernetes.io/projected/ba4ee6bf-8298-425c-8603-0816ef6d62a2-kube-api-access-x69r9\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " 
pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.320693 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vsgwh\" (UID: \"ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vsgwh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.320759 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2b80006-d9e1-40e5-becc-5764e747f572-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.320809 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8f3171d3-7275-477b-8c99-cae75ecd914c-images\") pod \"machine-api-operator-5694c8668f-kv6zt\" (UID: \"8f3171d3-7275-477b-8c99-cae75ecd914c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.320842 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5695d1ed-642b-4546-9624-306b27441931-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xxvkh\" (UID: \"5695d1ed-642b-4546-9624-306b27441931\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.320895 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/cecb279e-a3b6-4860-9afe-62cf3eeb2e9c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gbcr5\" (UID: \"cecb279e-a3b6-4860-9afe-62cf3eeb2e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.320940 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04efc7d0-c0f8-44ee-ac0e-5289f770f39e-webhook-cert\") pod \"packageserver-d55dfcdfc-q6qrp\" (UID: \"04efc7d0-c0f8-44ee-ac0e-5289f770f39e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.320966 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f3a2c39-d679-4b61-affc-eeb451304860-config\") pod \"service-ca-operator-777779d784-p4g92\" (UID: \"4f3a2c39-d679-4b61-affc-eeb451304860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4g92" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321018 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/702885bb-6915-436f-b925-b4c1e88e5edf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2cqlg\" (UID: \"702885bb-6915-436f-b925-b4c1e88e5edf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321047 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2b80006-d9e1-40e5-becc-5764e747f572-audit-policies\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 
06:45:23.321096 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnc5k\" (UniqueName: \"kubernetes.io/projected/d2b80006-d9e1-40e5-becc-5764e747f572-kube-api-access-hnc5k\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321125 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw88b\" (UniqueName: \"kubernetes.io/projected/4f3a2c39-d679-4b61-affc-eeb451304860-kube-api-access-xw88b\") pod \"service-ca-operator-777779d784-p4g92\" (UID: \"4f3a2c39-d679-4b61-affc-eeb451304860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4g92" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321201 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86e55636-cd31-4ec5-9e24-2c281d474481-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wfgl2\" (UID: \"86e55636-cd31-4ec5-9e24-2c281d474481\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321228 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqxnh\" (UniqueName: \"kubernetes.io/projected/ea0d9432-9215-4303-8914-0b0d4c7e49a8-kube-api-access-jqxnh\") pod \"router-default-5444994796-k7lmb\" (UID: \"ea0d9432-9215-4303-8914-0b0d4c7e49a8\") " pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321253 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c641c91-1772-452d-b8e5-e2e917fe0f3e-etcd-service-ca\") pod \"etcd-operator-b45778765-kv4b4\" (UID: 
\"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321260 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2b80006-d9e1-40e5-becc-5764e747f572-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321275 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba4ee6bf-8298-425c-8603-0816ef6d62a2-config\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321304 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65572f7a-260e-4d12-b9ad-e17f1b17eab4-config\") pod \"route-controller-manager-6576b87f9c-fc67z\" (UID: \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321330 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321354 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/d2b80006-d9e1-40e5-becc-5764e747f572-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321380 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r9nk\" (UniqueName: \"kubernetes.io/projected/fcc4167b-60df-4666-b1b5-dc5ea87b7f6e-kube-api-access-8r9nk\") pod \"authentication-operator-69f744f599-bb2t2\" (UID: \"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321447 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ba4ee6bf-8298-425c-8603-0816ef6d62a2-node-pullsecrets\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321476 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx4jt\" (UniqueName: \"kubernetes.io/projected/702885bb-6915-436f-b925-b4c1e88e5edf-kube-api-access-vx4jt\") pod \"cluster-image-registry-operator-dc59b4c8b-2cqlg\" (UID: \"702885bb-6915-436f-b925-b4c1e88e5edf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321503 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf92b96c-c1bc-4102-a49b-003d08ef9de7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qjjpt\" (UID: \"cf92b96c-c1bc-4102-a49b-003d08ef9de7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt" Jan 31 06:45:23 crc 
kubenswrapper[4687]: I0131 06:45:23.321530 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2mqr\" (UniqueName: \"kubernetes.io/projected/abed3680-932f-4c8b-8ff2-3b011b996088-kube-api-access-v2mqr\") pod \"multus-admission-controller-857f4d67dd-kg8s2\" (UID: \"abed3680-932f-4c8b-8ff2-3b011b996088\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kg8s2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321554 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d2b80006-d9e1-40e5-becc-5764e747f572-encryption-config\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321581 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18165c42-63ba-4c65-8ba7-f0e205fc74b7-serving-cert\") pod \"console-operator-58897d9998-jrsbk\" (UID: \"18165c42-63ba-4c65-8ba7-f0e205fc74b7\") " pod="openshift-console-operator/console-operator-58897d9998-jrsbk" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321644 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8qck\" (UniqueName: \"kubernetes.io/projected/787fd9f1-90b0-454b-a9cf-2016db70043d-kube-api-access-t8qck\") pod \"machine-config-controller-84d6567774-4qxbh\" (UID: \"787fd9f1-90b0-454b-a9cf-2016db70043d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321673 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-trusted-ca-bundle\") pod \"console-f9d7485db-crdmb\" (UID: 
\"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321704 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd2sz\" (UniqueName: \"kubernetes.io/projected/5ff64219-76d2-4a04-9932-59f5c1619358-kube-api-access-dd2sz\") pod \"cluster-samples-operator-665b6dd947-s56tv\" (UID: \"5ff64219-76d2-4a04-9932-59f5c1619358\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s56tv" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321729 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fed9a01f-700b-493d-bb38-7a730dddccb3-secret-volume\") pod \"collect-profiles-29497365-4d98l\" (UID: \"fed9a01f-700b-493d-bb38-7a730dddccb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321753 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b987ab3e-a46b-4852-881e-cd84a2f42e26-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zkmnb\" (UID: \"b987ab3e-a46b-4852-881e-cd84a2f42e26\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321776 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7c641c91-1772-452d-b8e5-e2e917fe0f3e-etcd-ca\") pod \"etcd-operator-b45778765-kv4b4\" (UID: \"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321799 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/ea0d9432-9215-4303-8914-0b0d4c7e49a8-metrics-certs\") pod \"router-default-5444994796-k7lmb\" (UID: \"ea0d9432-9215-4303-8914-0b0d4c7e49a8\") " pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321824 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/787fd9f1-90b0-454b-a9cf-2016db70043d-proxy-tls\") pod \"machine-config-controller-84d6567774-4qxbh\" (UID: \"787fd9f1-90b0-454b-a9cf-2016db70043d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321832 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8f3171d3-7275-477b-8c99-cae75ecd914c-images\") pod \"machine-api-operator-5694c8668f-kv6zt\" (UID: \"8f3171d3-7275-477b-8c99-cae75ecd914c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321850 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ba4ee6bf-8298-425c-8603-0816ef6d62a2-etcd-serving-ca\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321894 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e4841e-e880-45f4-8769-cd9fea35654e-serving-cert\") pod \"controller-manager-879f6c89f-8qhsc\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321917 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba4ee6bf-8298-425c-8603-0816ef6d62a2-audit-dir\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321936 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321957 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkkv6\" (UniqueName: \"kubernetes.io/projected/8f3171d3-7275-477b-8c99-cae75ecd914c-kube-api-access-jkkv6\") pod \"machine-api-operator-5694c8668f-kv6zt\" (UID: \"8f3171d3-7275-477b-8c99-cae75ecd914c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.321978 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d56dx\" (UniqueName: \"kubernetes.io/projected/18165c42-63ba-4c65-8ba7-f0e205fc74b7-kube-api-access-d56dx\") pod \"console-operator-58897d9998-jrsbk\" (UID: \"18165c42-63ba-4c65-8ba7-f0e205fc74b7\") " pod="openshift-console-operator/console-operator-58897d9998-jrsbk" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322004 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65572f7a-260e-4d12-b9ad-e17f1b17eab4-client-ca\") pod \"route-controller-manager-6576b87f9c-fc67z\" (UID: \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" Jan 31 
06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322021 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ctqc\" (UniqueName: \"kubernetes.io/projected/65572f7a-260e-4d12-b9ad-e17f1b17eab4-kube-api-access-7ctqc\") pod \"route-controller-manager-6576b87f9c-fc67z\" (UID: \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322038 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2hsr\" (UniqueName: \"kubernetes.io/projected/fed9a01f-700b-493d-bb38-7a730dddccb3-kube-api-access-h2hsr\") pod \"collect-profiles-29497365-4d98l\" (UID: \"fed9a01f-700b-493d-bb38-7a730dddccb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322054 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvbgd\" (UniqueName: \"kubernetes.io/projected/b987ab3e-a46b-4852-881e-cd84a2f42e26-kube-api-access-qvbgd\") pod \"kube-storage-version-migrator-operator-b67b599dd-zkmnb\" (UID: \"b987ab3e-a46b-4852-881e-cd84a2f42e26\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322070 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69csf\" (UniqueName: \"kubernetes.io/projected/f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b-kube-api-access-69csf\") pod \"machine-approver-56656f9798-zbgss\" (UID: \"f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322087 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cf92b96c-c1bc-4102-a49b-003d08ef9de7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qjjpt\" (UID: \"cf92b96c-c1bc-4102-a49b-003d08ef9de7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322098 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2b80006-d9e1-40e5-becc-5764e747f572-audit-policies\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322102 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/dadcab3b-fc57-4a2f-b680-09fc1d6b1dff-signing-key\") pod \"service-ca-9c57cc56f-bgn6j\" (UID: \"dadcab3b-fc57-4a2f-b680-09fc1d6b1dff\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgn6j" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322228 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wqtq\" (UniqueName: \"kubernetes.io/projected/175a043a-d6f7-4c39-953b-560986f36646-kube-api-access-5wqtq\") pod \"marketplace-operator-79b997595-c27wp\" (UID: \"175a043a-d6f7-4c39-953b-560986f36646\") " pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322250 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ba4ee6bf-8298-425c-8603-0816ef6d62a2-encryption-config\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322287 4687 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-console-serving-cert\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322303 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea0d9432-9215-4303-8914-0b0d4c7e49a8-service-ca-bundle\") pod \"router-default-5444994796-k7lmb\" (UID: \"ea0d9432-9215-4303-8914-0b0d4c7e49a8\") " pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322319 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxz88\" (UniqueName: \"kubernetes.io/projected/dadcab3b-fc57-4a2f-b680-09fc1d6b1dff-kube-api-access-kxz88\") pod \"service-ca-9c57cc56f-bgn6j\" (UID: \"dadcab3b-fc57-4a2f-b680-09fc1d6b1dff\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgn6j" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322336 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/175a043a-d6f7-4c39-953b-560986f36646-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c27wp\" (UID: \"175a043a-d6f7-4c39-953b-560986f36646\") " pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322354 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc 
kubenswrapper[4687]: I0131 06:45:23.322372 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322387 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f3a2c39-d679-4b61-affc-eeb451304860-serving-cert\") pod \"service-ca-operator-777779d784-p4g92\" (UID: \"4f3a2c39-d679-4b61-affc-eeb451304860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4g92" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322584 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ba4ee6bf-8298-425c-8603-0816ef6d62a2-etcd-serving-ca\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.322402 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86e55636-cd31-4ec5-9e24-2c281d474481-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wfgl2\" (UID: \"86e55636-cd31-4ec5-9e24-2c281d474481\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323468 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/702885bb-6915-436f-b925-b4c1e88e5edf-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2cqlg\" (UID: 
\"702885bb-6915-436f-b925-b4c1e88e5edf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323488 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-config\") pod \"controller-manager-879f6c89f-8qhsc\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323505 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pskw5\" (UniqueName: \"kubernetes.io/projected/cf92b96c-c1bc-4102-a49b-003d08ef9de7-kube-api-access-pskw5\") pod \"openshift-apiserver-operator-796bbdcf4f-qjjpt\" (UID: \"cf92b96c-c1bc-4102-a49b-003d08ef9de7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323539 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f3171d3-7275-477b-8c99-cae75ecd914c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kv6zt\" (UID: \"8f3171d3-7275-477b-8c99-cae75ecd914c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323558 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b-config\") pod \"machine-approver-56656f9798-zbgss\" (UID: \"f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323574 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfxmg\" 
(UniqueName: \"kubernetes.io/projected/8766ce0a-e289-4861-9b0a-9b9ad7c0e623-kube-api-access-gfxmg\") pod \"openshift-controller-manager-operator-756b6f6bc6-2nqsr\" (UID: \"8766ce0a-e289-4861-9b0a-9b9ad7c0e623\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323592 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5695d1ed-642b-4546-9624-306b27441931-trusted-ca\") pod \"ingress-operator-5b745b69d9-xxvkh\" (UID: \"5695d1ed-642b-4546-9624-306b27441931\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323615 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-audit-policies\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323634 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18165c42-63ba-4c65-8ba7-f0e205fc74b7-trusted-ca\") pod \"console-operator-58897d9998-jrsbk\" (UID: \"18165c42-63ba-4c65-8ba7-f0e205fc74b7\") " pod="openshift-console-operator/console-operator-58897d9998-jrsbk" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323654 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-service-ca\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323678 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea6afbaa-a516-45e0-bbd8-199b879e2654-audit-dir\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323713 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ba4ee6bf-8298-425c-8603-0816ef6d62a2-etcd-client\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323731 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65572f7a-260e-4d12-b9ad-e17f1b17eab4-serving-cert\") pod \"route-controller-manager-6576b87f9c-fc67z\" (UID: \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323749 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58d0a19-a26d-4bb4-a46a-4bffe9491a99-config\") pod \"kube-controller-manager-operator-78b949d7b-hsxqb\" (UID: \"c58d0a19-a26d-4bb4-a46a-4bffe9491a99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323765 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9wgx\" (UniqueName: \"kubernetes.io/projected/ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef-kube-api-access-n9wgx\") pod \"control-plane-machine-set-operator-78cbb6b69f-vsgwh\" (UID: \"ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vsgwh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323780 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8766ce0a-e289-4861-9b0a-9b9ad7c0e623-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2nqsr\" (UID: \"8766ce0a-e289-4861-9b0a-9b9ad7c0e623\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323797 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/dadcab3b-fc57-4a2f-b680-09fc1d6b1dff-signing-cabundle\") pod \"service-ca-9c57cc56f-bgn6j\" (UID: \"dadcab3b-fc57-4a2f-b680-09fc1d6b1dff\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgn6j" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323813 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b80006-d9e1-40e5-becc-5764e747f572-serving-cert\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323829 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7rw2\" (UniqueName: \"kubernetes.io/projected/25dca60d-d3da-4a23-b32a-cf4654f6298d-kube-api-access-b7rw2\") pod \"olm-operator-6b444d44fb-m6962\" (UID: \"25dca60d-d3da-4a23-b32a-cf4654f6298d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323846 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8qhsc\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323863 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fed9a01f-700b-493d-bb38-7a730dddccb3-config-volume\") pod \"collect-profiles-29497365-4d98l\" (UID: \"fed9a01f-700b-493d-bb38-7a730dddccb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323880 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86e55636-cd31-4ec5-9e24-2c281d474481-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wfgl2\" (UID: \"86e55636-cd31-4ec5-9e24-2c281d474481\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323897 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-oauth-serving-cert\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323913 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b-auth-proxy-config\") pod \"machine-approver-56656f9798-zbgss\" (UID: \"f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323933 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf5rs\" (UniqueName: \"kubernetes.io/projected/ea6afbaa-a516-45e0-bbd8-199b879e2654-kube-api-access-vf5rs\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323979 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/25dca60d-d3da-4a23-b32a-cf4654f6298d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m6962\" (UID: \"25dca60d-d3da-4a23-b32a-cf4654f6298d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.323996 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7ebcd1d8-3a13-4a8c-859b-b1d8351883ef-profile-collector-cert\") pod \"catalog-operator-68c6474976-pzw56\" (UID: \"7ebcd1d8-3a13-4a8c-859b-b1d8351883ef\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324036 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce8ee922-54be-446b-ab92-e5459763496c-metrics-tls\") pod \"dns-operator-744455d44c-47m2d\" (UID: \"ce8ee922-54be-446b-ab92-e5459763496c\") " pod="openshift-dns-operator/dns-operator-744455d44c-47m2d" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324095 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f32026ee-35c2-42bc-aa53-14e8ccc5e136-serving-cert\") pod \"openshift-config-operator-7777fb866f-zfg87\" (UID: \"f32026ee-35c2-42bc-aa53-14e8ccc5e136\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324112 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt2b9\" (UniqueName: \"kubernetes.io/projected/f32026ee-35c2-42bc-aa53-14e8ccc5e136-kube-api-access-jt2b9\") pod \"openshift-config-operator-7777fb866f-zfg87\" (UID: \"f32026ee-35c2-42bc-aa53-14e8ccc5e136\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324127 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ea0d9432-9215-4303-8914-0b0d4c7e49a8-default-certificate\") pod \"router-default-5444994796-k7lmb\" (UID: \"ea0d9432-9215-4303-8914-0b0d4c7e49a8\") " pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324146 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/787fd9f1-90b0-454b-a9cf-2016db70043d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4qxbh\" (UID: \"787fd9f1-90b0-454b-a9cf-2016db70043d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324166 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f3171d3-7275-477b-8c99-cae75ecd914c-config\") pod \"machine-api-operator-5694c8668f-kv6zt\" (UID: \"8f3171d3-7275-477b-8c99-cae75ecd914c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324181 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ba4ee6bf-8298-425c-8603-0816ef6d62a2-serving-cert\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324226 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324243 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b987ab3e-a46b-4852-881e-cd84a2f42e26-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zkmnb\" (UID: \"b987ab3e-a46b-4852-881e-cd84a2f42e26\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324259 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c641c91-1772-452d-b8e5-e2e917fe0f3e-serving-cert\") pod \"etcd-operator-b45778765-kv4b4\" (UID: \"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324276 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ba4ee6bf-8298-425c-8603-0816ef6d62a2-audit\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324301 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba4ee6bf-8298-425c-8603-0816ef6d62a2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324322 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc4167b-60df-4666-b1b5-dc5ea87b7f6e-service-ca-bundle\") pod \"authentication-operator-69f744f599-bb2t2\" (UID: \"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324403 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/25dca60d-d3da-4a23-b32a-cf4654f6298d-srv-cert\") pod \"olm-operator-6b444d44fb-m6962\" (UID: \"25dca60d-d3da-4a23-b32a-cf4654f6298d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324468 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324502 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-client-ca\") pod \"controller-manager-879f6c89f-8qhsc\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324518 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-console-oauth-config\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324535 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/abed3680-932f-4c8b-8ff2-3b011b996088-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kg8s2\" (UID: \"abed3680-932f-4c8b-8ff2-3b011b996088\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kg8s2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324553 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/702885bb-6915-436f-b925-b4c1e88e5edf-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2cqlg\" (UID: \"702885bb-6915-436f-b925-b4c1e88e5edf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324569 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c58d0a19-a26d-4bb4-a46a-4bffe9491a99-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hsxqb\" (UID: \"c58d0a19-a26d-4bb4-a46a-4bffe9491a99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324587 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx625\" (UniqueName: 
\"kubernetes.io/projected/e2e4841e-e880-45f4-8769-cd9fea35654e-kube-api-access-qx625\") pod \"controller-manager-879f6c89f-8qhsc\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324603 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04efc7d0-c0f8-44ee-ac0e-5289f770f39e-apiservice-cert\") pod \"packageserver-d55dfcdfc-q6qrp\" (UID: \"04efc7d0-c0f8-44ee-ac0e-5289f770f39e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324619 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/175a043a-d6f7-4c39-953b-560986f36646-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c27wp\" (UID: \"175a043a-d6f7-4c39-953b-560986f36646\") " pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324645 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f32026ee-35c2-42bc-aa53-14e8ccc5e136-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zfg87\" (UID: \"f32026ee-35c2-42bc-aa53-14e8ccc5e136\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324660 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/04efc7d0-c0f8-44ee-ac0e-5289f770f39e-tmpfs\") pod \"packageserver-d55dfcdfc-q6qrp\" (UID: \"04efc7d0-c0f8-44ee-ac0e-5289f770f39e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 
06:45:23.324679 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cecb279e-a3b6-4860-9afe-62cf3eeb2e9c-config\") pod \"kube-apiserver-operator-766d6c64bb-gbcr5\" (UID: \"cecb279e-a3b6-4860-9afe-62cf3eeb2e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324703 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-console-config\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324721 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324741 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvpzp\" (UniqueName: \"kubernetes.io/projected/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-kube-api-access-fvpzp\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324757 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324776 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k867c\" (UniqueName: \"kubernetes.io/projected/7c641c91-1772-452d-b8e5-e2e917fe0f3e-kube-api-access-k867c\") pod \"etcd-operator-b45778765-kv4b4\" (UID: \"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324794 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8766ce0a-e289-4861-9b0a-9b9ad7c0e623-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2nqsr\" (UID: \"8766ce0a-e289-4861-9b0a-9b9ad7c0e623\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324813 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ba4ee6bf-8298-425c-8603-0816ef6d62a2-image-import-ca\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324830 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ff64219-76d2-4a04-9932-59f5c1619358-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s56tv\" (UID: \"5ff64219-76d2-4a04-9932-59f5c1619358\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s56tv" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324855 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc5kn\" (UniqueName: 
\"kubernetes.io/projected/5695d1ed-642b-4546-9624-306b27441931-kube-api-access-tc5kn\") pod \"ingress-operator-5b745b69d9-xxvkh\" (UID: \"5695d1ed-642b-4546-9624-306b27441931\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324878 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc4167b-60df-4666-b1b5-dc5ea87b7f6e-config\") pod \"authentication-operator-69f744f599-bb2t2\" (UID: \"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324929 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c641c91-1772-452d-b8e5-e2e917fe0f3e-config\") pod \"etcd-operator-b45778765-kv4b4\" (UID: \"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.324947 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ea0d9432-9215-4303-8914-0b0d4c7e49a8-stats-auth\") pod \"router-default-5444994796-k7lmb\" (UID: \"ea0d9432-9215-4303-8914-0b0d4c7e49a8\") " pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.325018 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8mhg\" (UniqueName: \"kubernetes.io/projected/7ebcd1d8-3a13-4a8c-859b-b1d8351883ef-kube-api-access-b8mhg\") pod \"catalog-operator-68c6474976-pzw56\" (UID: \"7ebcd1d8-3a13-4a8c-859b-b1d8351883ef\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.325044 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5695d1ed-642b-4546-9624-306b27441931-metrics-tls\") pod \"ingress-operator-5b745b69d9-xxvkh\" (UID: \"5695d1ed-642b-4546-9624-306b27441931\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.325063 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcc4167b-60df-4666-b1b5-dc5ea87b7f6e-serving-cert\") pod \"authentication-operator-69f744f599-bb2t2\" (UID: \"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.325078 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.325092 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ebcd1d8-3a13-4a8c-859b-b1d8351883ef-srv-cert\") pod \"catalog-operator-68c6474976-pzw56\" (UID: \"7ebcd1d8-3a13-4a8c-859b-b1d8351883ef\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.325110 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2b80006-d9e1-40e5-becc-5764e747f572-audit-dir\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 
06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.325125 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b-machine-approver-tls\") pod \"machine-approver-56656f9798-zbgss\" (UID: \"f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.325144 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.325163 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9tsl\" (UniqueName: \"kubernetes.io/projected/e5b7bf80-e0c2-461f-944b-43b00db98f09-kube-api-access-s9tsl\") pod \"downloads-7954f5f757-vxbfn\" (UID: \"e5b7bf80-e0c2-461f-944b-43b00db98f09\") " pod="openshift-console/downloads-7954f5f757-vxbfn" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.325181 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18165c42-63ba-4c65-8ba7-f0e205fc74b7-config\") pod \"console-operator-58897d9998-jrsbk\" (UID: \"18165c42-63ba-4c65-8ba7-f0e205fc74b7\") " pod="openshift-console-operator/console-operator-58897d9998-jrsbk" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.325206 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7c641c91-1772-452d-b8e5-e2e917fe0f3e-etcd-client\") pod \"etcd-operator-b45778765-kv4b4\" (UID: 
\"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.325284 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc4167b-60df-4666-b1b5-dc5ea87b7f6e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bb2t2\" (UID: \"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.325307 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cecb279e-a3b6-4860-9afe-62cf3eeb2e9c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gbcr5\" (UID: \"cecb279e-a3b6-4860-9afe-62cf3eeb2e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.325448 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65572f7a-260e-4d12-b9ad-e17f1b17eab4-config\") pod \"route-controller-manager-6576b87f9c-fc67z\" (UID: \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.325556 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.326169 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b987ab3e-a46b-4852-881e-cd84a2f42e26-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zkmnb\" (UID: \"b987ab3e-a46b-4852-881e-cd84a2f42e26\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.326248 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ba4ee6bf-8298-425c-8603-0816ef6d62a2-node-pullsecrets\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.326742 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba4ee6bf-8298-425c-8603-0816ef6d62a2-audit-dir\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.326878 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba4ee6bf-8298-425c-8603-0816ef6d62a2-config\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.326972 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-trusted-ca-bundle\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.327157 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cf92b96c-c1bc-4102-a49b-003d08ef9de7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qjjpt\" (UID: \"cf92b96c-c1bc-4102-a49b-003d08ef9de7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.327275 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d2b80006-d9e1-40e5-becc-5764e747f572-etcd-client\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.327286 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7c641c91-1772-452d-b8e5-e2e917fe0f3e-etcd-ca\") pod \"etcd-operator-b45778765-kv4b4\" (UID: \"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.327711 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d2b80006-d9e1-40e5-becc-5764e747f572-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.327906 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ba4ee6bf-8298-425c-8603-0816ef6d62a2-audit\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.325430 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c58d0a19-a26d-4bb4-a46a-4bffe9491a99-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hsxqb\" (UID: \"c58d0a19-a26d-4bb4-a46a-4bffe9491a99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.328264 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvkt2\" (UniqueName: \"kubernetes.io/projected/ce8ee922-54be-446b-ab92-e5459763496c-kube-api-access-hvkt2\") pod \"dns-operator-744455d44c-47m2d\" (UID: \"ce8ee922-54be-446b-ab92-e5459763496c\") " pod="openshift-dns-operator/dns-operator-744455d44c-47m2d" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.328292 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbkpz\" (UniqueName: \"kubernetes.io/projected/04efc7d0-c0f8-44ee-ac0e-5289f770f39e-kube-api-access-zbkpz\") pod \"packageserver-d55dfcdfc-q6qrp\" (UID: \"04efc7d0-c0f8-44ee-ac0e-5289f770f39e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.328323 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f32026ee-35c2-42bc-aa53-14e8ccc5e136-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zfg87\" (UID: \"f32026ee-35c2-42bc-aa53-14e8ccc5e136\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.328544 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/04efc7d0-c0f8-44ee-ac0e-5289f770f39e-tmpfs\") pod \"packageserver-d55dfcdfc-q6qrp\" (UID: \"04efc7d0-c0f8-44ee-ac0e-5289f770f39e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" Jan 31 06:45:23 crc 
kubenswrapper[4687]: I0131 06:45:23.328637 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba4ee6bf-8298-425c-8603-0816ef6d62a2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.328859 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.329468 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d2b80006-d9e1-40e5-becc-5764e747f572-encryption-config\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.329577 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2b80006-d9e1-40e5-becc-5764e747f572-audit-dir\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.329810 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc4167b-60df-4666-b1b5-dc5ea87b7f6e-service-ca-bundle\") pod \"authentication-operator-69f744f599-bb2t2\" (UID: \"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.330669 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.333134 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.333328 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-client-ca\") pod \"controller-manager-879f6c89f-8qhsc\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.333494 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-console-serving-cert\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.333778 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e4841e-e880-45f4-8769-cd9fea35654e-serving-cert\") pod \"controller-manager-879f6c89f-8qhsc\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.334319 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ba4ee6bf-8298-425c-8603-0816ef6d62a2-image-import-ca\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.335188 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b-auth-proxy-config\") pod \"machine-approver-56656f9798-zbgss\" (UID: \"f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.335513 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.335772 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f3171d3-7275-477b-8c99-cae75ecd914c-config\") pod \"machine-api-operator-5694c8668f-kv6zt\" (UID: \"8f3171d3-7275-477b-8c99-cae75ecd914c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.336257 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.337200 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c641c91-1772-452d-b8e5-e2e917fe0f3e-serving-cert\") pod \"etcd-operator-b45778765-kv4b4\" (UID: \"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.337397 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/787fd9f1-90b0-454b-a9cf-2016db70043d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4qxbh\" (UID: \"787fd9f1-90b0-454b-a9cf-2016db70043d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.337561 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-console-config\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.337797 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.337930 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf92b96c-c1bc-4102-a49b-003d08ef9de7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qjjpt\" 
(UID: \"cf92b96c-c1bc-4102-a49b-003d08ef9de7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.338065 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ba4ee6bf-8298-425c-8603-0816ef6d62a2-encryption-config\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.338142 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.338303 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea6afbaa-a516-45e0-bbd8-199b879e2654-audit-dir\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.338680 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b-config\") pod \"machine-approver-56656f9798-zbgss\" (UID: \"f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.338844 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-audit-policies\") pod 
\"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.338952 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.339688 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcc4167b-60df-4666-b1b5-dc5ea87b7f6e-serving-cert\") pod \"authentication-operator-69f744f599-bb2t2\" (UID: \"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.340259 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc4167b-60df-4666-b1b5-dc5ea87b7f6e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bb2t2\" (UID: \"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.340323 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5695d1ed-642b-4546-9624-306b27441931-trusted-ca\") pod \"ingress-operator-5b745b69d9-xxvkh\" (UID: \"5695d1ed-642b-4546-9624-306b27441931\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.340626 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.340629 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f32026ee-35c2-42bc-aa53-14e8ccc5e136-serving-cert\") pod \"openshift-config-operator-7777fb866f-zfg87\" (UID: \"f32026ee-35c2-42bc-aa53-14e8ccc5e136\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.340720 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b987ab3e-a46b-4852-881e-cd84a2f42e26-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zkmnb\" (UID: \"b987ab3e-a46b-4852-881e-cd84a2f42e26\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.341177 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ff64219-76d2-4a04-9932-59f5c1619358-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s56tv\" (UID: \"5ff64219-76d2-4a04-9932-59f5c1619358\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s56tv" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.341254 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: 
\"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.341443 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8qhsc\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.341719 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5695d1ed-642b-4546-9624-306b27441931-metrics-tls\") pod \"ingress-operator-5b745b69d9-xxvkh\" (UID: \"5695d1ed-642b-4546-9624-306b27441931\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.341962 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-config\") pod \"controller-manager-879f6c89f-8qhsc\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.342797 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18165c42-63ba-4c65-8ba7-f0e205fc74b7-serving-cert\") pod \"console-operator-58897d9998-jrsbk\" (UID: \"18165c42-63ba-4c65-8ba7-f0e205fc74b7\") " pod="openshift-console-operator/console-operator-58897d9998-jrsbk" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.342830 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.343060 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b-machine-approver-tls\") pod \"machine-approver-56656f9798-zbgss\" (UID: \"f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.343848 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65572f7a-260e-4d12-b9ad-e17f1b17eab4-serving-cert\") pod \"route-controller-manager-6576b87f9c-fc67z\" (UID: \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.346334 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.348333 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-console-oauth-config\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.348685 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18165c42-63ba-4c65-8ba7-f0e205fc74b7-config\") pod \"console-operator-58897d9998-jrsbk\" (UID: 
\"18165c42-63ba-4c65-8ba7-f0e205fc74b7\") " pod="openshift-console-operator/console-operator-58897d9998-jrsbk" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.349693 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc4167b-60df-4666-b1b5-dc5ea87b7f6e-config\") pod \"authentication-operator-69f744f599-bb2t2\" (UID: \"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.351045 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-service-ca\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.351656 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65572f7a-260e-4d12-b9ad-e17f1b17eab4-client-ca\") pod \"route-controller-manager-6576b87f9c-fc67z\" (UID: \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.351837 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c641c91-1772-452d-b8e5-e2e917fe0f3e-config\") pod \"etcd-operator-b45778765-kv4b4\" (UID: \"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.352418 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b80006-d9e1-40e5-becc-5764e747f572-serving-cert\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: 
\"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.352608 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18165c42-63ba-4c65-8ba7-f0e205fc74b7-trusted-ca\") pod \"console-operator-58897d9998-jrsbk\" (UID: \"18165c42-63ba-4c65-8ba7-f0e205fc74b7\") " pod="openshift-console-operator/console-operator-58897d9998-jrsbk" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.352712 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-oauth-serving-cert\") pod \"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.354159 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7c641c91-1772-452d-b8e5-e2e917fe0f3e-etcd-client\") pod \"etcd-operator-b45778765-kv4b4\" (UID: \"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.354864 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f3171d3-7275-477b-8c99-cae75ecd914c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kv6zt\" (UID: \"8f3171d3-7275-477b-8c99-cae75ecd914c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.357288 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba4ee6bf-8298-425c-8603-0816ef6d62a2-serving-cert\") pod \"apiserver-76f77b778f-bxz2x\" (UID: 
\"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.362199 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ba4ee6bf-8298-425c-8603-0816ef6d62a2-etcd-client\") pod \"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.368823 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.386288 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.398859 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86e55636-cd31-4ec5-9e24-2c281d474481-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wfgl2\" (UID: \"86e55636-cd31-4ec5-9e24-2c281d474481\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.413238 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.421205 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/702885bb-6915-436f-b925-b4c1e88e5edf-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2cqlg\" (UID: \"702885bb-6915-436f-b925-b4c1e88e5edf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.426581 4687 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.432807 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86e55636-cd31-4ec5-9e24-2c281d474481-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wfgl2\" (UID: \"86e55636-cd31-4ec5-9e24-2c281d474481\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.445800 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.477217 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.485642 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/702885bb-6915-436f-b925-b4c1e88e5edf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2cqlg\" (UID: \"702885bb-6915-436f-b925-b4c1e88e5edf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.486761 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.497243 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c641c91-1772-452d-b8e5-e2e917fe0f3e-etcd-service-ca\") pod \"etcd-operator-b45778765-kv4b4\" (UID: \"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 
06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.505367 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.526138 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.546375 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.552639 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8766ce0a-e289-4861-9b0a-9b9ad7c0e623-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2nqsr\" (UID: \"8766ce0a-e289-4861-9b0a-9b9ad7c0e623\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.566112 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.570314 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8766ce0a-e289-4861-9b0a-9b9ad7c0e623-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2nqsr\" (UID: \"8766ce0a-e289-4861-9b0a-9b9ad7c0e623\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.586107 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 
06:45:23.606669 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.625655 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.646528 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.667112 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.675124 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cecb279e-a3b6-4860-9afe-62cf3eeb2e9c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gbcr5\" (UID: \"cecb279e-a3b6-4860-9afe-62cf3eeb2e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.690526 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.707538 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.727317 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.729046 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cecb279e-a3b6-4860-9afe-62cf3eeb2e9c-config\") pod 
\"kube-apiserver-operator-766d6c64bb-gbcr5\" (UID: \"cecb279e-a3b6-4860-9afe-62cf3eeb2e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.746942 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.760649 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce8ee922-54be-446b-ab92-e5459763496c-metrics-tls\") pod \"dns-operator-744455d44c-47m2d\" (UID: \"ce8ee922-54be-446b-ab92-e5459763496c\") " pod="openshift-dns-operator/dns-operator-744455d44c-47m2d" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.766458 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.786212 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.807218 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.826186 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.846533 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.866029 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.885576 4687 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.892358 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c58d0a19-a26d-4bb4-a46a-4bffe9491a99-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hsxqb\" (UID: \"c58d0a19-a26d-4bb4-a46a-4bffe9491a99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.906262 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.914703 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58d0a19-a26d-4bb4-a46a-4bffe9491a99-config\") pod \"kube-controller-manager-operator-78b949d7b-hsxqb\" (UID: \"c58d0a19-a26d-4bb4-a46a-4bffe9491a99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.925800 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.946461 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 06:45:23 crc kubenswrapper[4687]: I0131 06:45:23.965766 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.006812 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-stats-default" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.013803 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ea0d9432-9215-4303-8914-0b0d4c7e49a8-stats-auth\") pod \"router-default-5444994796-k7lmb\" (UID: \"ea0d9432-9215-4303-8914-0b0d4c7e49a8\") " pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.025553 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.032758 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea0d9432-9215-4303-8914-0b0d4c7e49a8-metrics-certs\") pod \"router-default-5444994796-k7lmb\" (UID: \"ea0d9432-9215-4303-8914-0b0d4c7e49a8\") " pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.047465 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.066269 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.078579 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ea0d9432-9215-4303-8914-0b0d4c7e49a8-default-certificate\") pod \"router-default-5444994796-k7lmb\" (UID: \"ea0d9432-9215-4303-8914-0b0d4c7e49a8\") " pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.087308 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.107073 4687 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.119521 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea0d9432-9215-4303-8914-0b0d4c7e49a8-service-ca-bundle\") pod \"router-default-5444994796-k7lmb\" (UID: \"ea0d9432-9215-4303-8914-0b0d4c7e49a8\") " pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.126501 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.130668 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/787fd9f1-90b0-454b-a9cf-2016db70043d-proxy-tls\") pod \"machine-config-controller-84d6567774-4qxbh\" (UID: \"787fd9f1-90b0-454b-a9cf-2016db70043d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.147229 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.164340 4687 request.go:700] Waited for 1.000626871s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.166474 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.187343 4687 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.207014 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.213467 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ebcd1d8-3a13-4a8c-859b-b1d8351883ef-srv-cert\") pod \"catalog-operator-68c6474976-pzw56\" (UID: \"7ebcd1d8-3a13-4a8c-859b-b1d8351883ef\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.226115 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.236708 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fed9a01f-700b-493d-bb38-7a730dddccb3-secret-volume\") pod \"collect-profiles-29497365-4d98l\" (UID: \"fed9a01f-700b-493d-bb38-7a730dddccb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.236961 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7ebcd1d8-3a13-4a8c-859b-b1d8351883ef-profile-collector-cert\") pod \"catalog-operator-68c6474976-pzw56\" (UID: \"7ebcd1d8-3a13-4a8c-859b-b1d8351883ef\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.237044 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/25dca60d-d3da-4a23-b32a-cf4654f6298d-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-m6962\" (UID: \"25dca60d-d3da-4a23-b32a-cf4654f6298d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.245925 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.265968 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.270928 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04efc7d0-c0f8-44ee-ac0e-5289f770f39e-apiservice-cert\") pod \"packageserver-d55dfcdfc-q6qrp\" (UID: \"04efc7d0-c0f8-44ee-ac0e-5289f770f39e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.274646 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04efc7d0-c0f8-44ee-ac0e-5289f770f39e-webhook-cert\") pod \"packageserver-d55dfcdfc-q6qrp\" (UID: \"04efc7d0-c0f8-44ee-ac0e-5289f770f39e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.286053 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.292240 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/abed3680-932f-4c8b-8ff2-3b011b996088-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kg8s2\" (UID: \"abed3680-932f-4c8b-8ff2-3b011b996088\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kg8s2" Jan 31 06:45:24 crc kubenswrapper[4687]: 
I0131 06:45:24.307290 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.321927 4687 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.322017 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f3a2c39-d679-4b61-affc-eeb451304860-config podName:4f3a2c39-d679-4b61-affc-eeb451304860 nodeName:}" failed. No retries permitted until 2026-01-31 06:45:24.821998353 +0000 UTC m=+151.099257928 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f3a2c39-d679-4b61-affc-eeb451304860-config") pod "service-ca-operator-777779d784-p4g92" (UID: "4f3a2c39-d679-4b61-affc-eeb451304860") : failed to sync configmap cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.321929 4687 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.322142 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef-control-plane-machine-set-operator-tls podName:ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef nodeName:}" failed. No retries permitted until 2026-01-31 06:45:24.822118556 +0000 UTC m=+151.099378131 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-vsgwh" (UID: "ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef") : failed to sync secret cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.322164 4687 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.322188 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dadcab3b-fc57-4a2f-b680-09fc1d6b1dff-signing-key podName:dadcab3b-fc57-4a2f-b680-09fc1d6b1dff nodeName:}" failed. No retries permitted until 2026-01-31 06:45:24.822180998 +0000 UTC m=+151.099440573 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/dadcab3b-fc57-4a2f-b680-09fc1d6b1dff-signing-key") pod "service-ca-9c57cc56f-bgn6j" (UID: "dadcab3b-fc57-4a2f-b680-09fc1d6b1dff") : failed to sync secret cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.327806 4687 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.327880 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/175a043a-d6f7-4c39-953b-560986f36646-marketplace-trusted-ca podName:175a043a-d6f7-4c39-953b-560986f36646 nodeName:}" failed. No retries permitted until 2026-01-31 06:45:24.82785921 +0000 UTC m=+151.105118785 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/175a043a-d6f7-4c39-953b-560986f36646-marketplace-trusted-ca") pod "marketplace-operator-79b997595-c27wp" (UID: "175a043a-d6f7-4c39-953b-560986f36646") : failed to sync configmap cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.328159 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.329000 4687 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.329035 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/175a043a-d6f7-4c39-953b-560986f36646-marketplace-operator-metrics podName:175a043a-d6f7-4c39-953b-560986f36646 nodeName:}" failed. No retries permitted until 2026-01-31 06:45:24.829025653 +0000 UTC m=+151.106285228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/175a043a-d6f7-4c39-953b-560986f36646-marketplace-operator-metrics") pod "marketplace-operator-79b997595-c27wp" (UID: "175a043a-d6f7-4c39-953b-560986f36646") : failed to sync secret cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.331503 4687 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.331573 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dadcab3b-fc57-4a2f-b680-09fc1d6b1dff-signing-cabundle podName:dadcab3b-fc57-4a2f-b680-09fc1d6b1dff nodeName:}" failed. 
No retries permitted until 2026-01-31 06:45:24.831555356 +0000 UTC m=+151.108814931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/dadcab3b-fc57-4a2f-b680-09fc1d6b1dff-signing-cabundle") pod "service-ca-9c57cc56f-bgn6j" (UID: "dadcab3b-fc57-4a2f-b680-09fc1d6b1dff") : failed to sync configmap cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.331583 4687 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.331603 4687 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.331629 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f3a2c39-d679-4b61-affc-eeb451304860-serving-cert podName:4f3a2c39-d679-4b61-affc-eeb451304860 nodeName:}" failed. No retries permitted until 2026-01-31 06:45:24.831614237 +0000 UTC m=+151.108873812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f3a2c39-d679-4b61-affc-eeb451304860-serving-cert") pod "service-ca-operator-777779d784-p4g92" (UID: "4f3a2c39-d679-4b61-affc-eeb451304860") : failed to sync secret cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.331651 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25dca60d-d3da-4a23-b32a-cf4654f6298d-srv-cert podName:25dca60d-d3da-4a23-b32a-cf4654f6298d nodeName:}" failed. No retries permitted until 2026-01-31 06:45:24.831641828 +0000 UTC m=+151.108901403 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/25dca60d-d3da-4a23-b32a-cf4654f6298d-srv-cert") pod "olm-operator-6b444d44fb-m6962" (UID: "25dca60d-d3da-4a23-b32a-cf4654f6298d") : failed to sync secret cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.332399 4687 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: E0131 06:45:24.332461 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fed9a01f-700b-493d-bb38-7a730dddccb3-config-volume podName:fed9a01f-700b-493d-bb38-7a730dddccb3 nodeName:}" failed. No retries permitted until 2026-01-31 06:45:24.832452951 +0000 UTC m=+151.109712526 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fed9a01f-700b-493d-bb38-7a730dddccb3-config-volume") pod "collect-profiles-29497365-4d98l" (UID: "fed9a01f-700b-493d-bb38-7a730dddccb3") : failed to sync configmap cache: timed out waiting for the condition Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.347315 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.367451 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.385805 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.405522 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 06:45:24 crc 
kubenswrapper[4687]: I0131 06:45:24.431221 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.447046 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.466324 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.486386 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.507140 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.526803 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.546935 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.566764 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.587245 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.606186 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.627223 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.647075 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.666530 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.686634 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.707213 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.725862 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.746551 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.767040 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.786059 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.826795 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.847018 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.854039 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/dadcab3b-fc57-4a2f-b680-09fc1d6b1dff-signing-cabundle\") pod \"service-ca-9c57cc56f-bgn6j\" (UID: \"dadcab3b-fc57-4a2f-b680-09fc1d6b1dff\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgn6j" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.854105 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fed9a01f-700b-493d-bb38-7a730dddccb3-config-volume\") pod \"collect-profiles-29497365-4d98l\" (UID: \"fed9a01f-700b-493d-bb38-7a730dddccb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.854175 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/25dca60d-d3da-4a23-b32a-cf4654f6298d-srv-cert\") pod \"olm-operator-6b444d44fb-m6962\" (UID: \"25dca60d-d3da-4a23-b32a-cf4654f6298d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.854221 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/175a043a-d6f7-4c39-953b-560986f36646-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c27wp\" (UID: \"175a043a-d6f7-4c39-953b-560986f36646\") " pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.854347 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vsgwh\" (UID: \"ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vsgwh" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.854393 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f3a2c39-d679-4b61-affc-eeb451304860-config\") pod \"service-ca-operator-777779d784-p4g92\" (UID: \"4f3a2c39-d679-4b61-affc-eeb451304860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4g92" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.854575 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/dadcab3b-fc57-4a2f-b680-09fc1d6b1dff-signing-key\") pod \"service-ca-9c57cc56f-bgn6j\" (UID: \"dadcab3b-fc57-4a2f-b680-09fc1d6b1dff\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgn6j" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.854618 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/175a043a-d6f7-4c39-953b-560986f36646-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c27wp\" (UID: \"175a043a-d6f7-4c39-953b-560986f36646\") " pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.854643 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f3a2c39-d679-4b61-affc-eeb451304860-serving-cert\") pod \"service-ca-operator-777779d784-p4g92\" (UID: \"4f3a2c39-d679-4b61-affc-eeb451304860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4g92" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 
06:45:24.855282 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f3a2c39-d679-4b61-affc-eeb451304860-config\") pod \"service-ca-operator-777779d784-p4g92\" (UID: \"4f3a2c39-d679-4b61-affc-eeb451304860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4g92" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.856201 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fed9a01f-700b-493d-bb38-7a730dddccb3-config-volume\") pod \"collect-profiles-29497365-4d98l\" (UID: \"fed9a01f-700b-493d-bb38-7a730dddccb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.856241 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/dadcab3b-fc57-4a2f-b680-09fc1d6b1dff-signing-cabundle\") pod \"service-ca-9c57cc56f-bgn6j\" (UID: \"dadcab3b-fc57-4a2f-b680-09fc1d6b1dff\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgn6j" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.856364 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/175a043a-d6f7-4c39-953b-560986f36646-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c27wp\" (UID: \"175a043a-d6f7-4c39-953b-560986f36646\") " pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.857870 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/25dca60d-d3da-4a23-b32a-cf4654f6298d-srv-cert\") pod \"olm-operator-6b444d44fb-m6962\" (UID: \"25dca60d-d3da-4a23-b32a-cf4654f6298d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" Jan 31 06:45:24 crc 
kubenswrapper[4687]: I0131 06:45:24.857958 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/175a043a-d6f7-4c39-953b-560986f36646-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c27wp\" (UID: \"175a043a-d6f7-4c39-953b-560986f36646\") " pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.858262 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/dadcab3b-fc57-4a2f-b680-09fc1d6b1dff-signing-key\") pod \"service-ca-9c57cc56f-bgn6j\" (UID: \"dadcab3b-fc57-4a2f-b680-09fc1d6b1dff\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgn6j" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.858800 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f3a2c39-d679-4b61-affc-eeb451304860-serving-cert\") pod \"service-ca-operator-777779d784-p4g92\" (UID: \"4f3a2c39-d679-4b61-affc-eeb451304860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4g92" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.860074 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vsgwh\" (UID: \"ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vsgwh" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.866696 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.885927 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.905571 4687 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.926565 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.946705 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.965781 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 06:45:24 crc kubenswrapper[4687]: I0131 06:45:24.985654 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.005610 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.026246 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.046815 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.066703 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.104133 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x69r9\" (UniqueName: \"kubernetes.io/projected/ba4ee6bf-8298-425c-8603-0816ef6d62a2-kube-api-access-x69r9\") pod 
\"apiserver-76f77b778f-bxz2x\" (UID: \"ba4ee6bf-8298-425c-8603-0816ef6d62a2\") " pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.124189 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw88b\" (UniqueName: \"kubernetes.io/projected/4f3a2c39-d679-4b61-affc-eeb451304860-kube-api-access-xw88b\") pod \"service-ca-operator-777779d784-p4g92\" (UID: \"4f3a2c39-d679-4b61-affc-eeb451304860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4g92" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.141242 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.147017 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5695d1ed-642b-4546-9624-306b27441931-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xxvkh\" (UID: \"5695d1ed-642b-4546-9624-306b27441931\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.163214 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cecb279e-a3b6-4860-9afe-62cf3eeb2e9c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gbcr5\" (UID: \"cecb279e-a3b6-4860-9afe-62cf3eeb2e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.164892 4687 request.go:700] Waited for 1.842663458s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.179398 4687 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.183892 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnc5k\" (UniqueName: \"kubernetes.io/projected/d2b80006-d9e1-40e5-becc-5764e747f572-kube-api-access-hnc5k\") pod \"apiserver-7bbb656c7d-q9tfw\" (UID: \"d2b80006-d9e1-40e5-becc-5764e747f572\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.200035 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.200745 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2mqr\" (UniqueName: \"kubernetes.io/projected/abed3680-932f-4c8b-8ff2-3b011b996088-kube-api-access-v2mqr\") pod \"multus-admission-controller-857f4d67dd-kg8s2\" (UID: \"abed3680-932f-4c8b-8ff2-3b011b996088\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kg8s2" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.228323 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvbgd\" (UniqueName: \"kubernetes.io/projected/b987ab3e-a46b-4852-881e-cd84a2f42e26-kube-api-access-qvbgd\") pod \"kube-storage-version-migrator-operator-b67b599dd-zkmnb\" (UID: \"b987ab3e-a46b-4852-881e-cd84a2f42e26\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.242173 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kg8s2" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.244601 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ctqc\" (UniqueName: \"kubernetes.io/projected/65572f7a-260e-4d12-b9ad-e17f1b17eab4-kube-api-access-7ctqc\") pod \"route-controller-manager-6576b87f9c-fc67z\" (UID: \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.267799 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2hsr\" (UniqueName: \"kubernetes.io/projected/fed9a01f-700b-493d-bb38-7a730dddccb3-kube-api-access-h2hsr\") pod \"collect-profiles-29497365-4d98l\" (UID: \"fed9a01f-700b-493d-bb38-7a730dddccb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.280032 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxz88\" (UniqueName: \"kubernetes.io/projected/dadcab3b-fc57-4a2f-b680-09fc1d6b1dff-kube-api-access-kxz88\") pod \"service-ca-9c57cc56f-bgn6j\" (UID: \"dadcab3b-fc57-4a2f-b680-09fc1d6b1dff\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgn6j" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.292883 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4g92" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.300790 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.314958 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69csf\" (UniqueName: \"kubernetes.io/projected/f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b-kube-api-access-69csf\") pod \"machine-approver-56656f9798-zbgss\" (UID: \"f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.322478 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqxnh\" (UniqueName: \"kubernetes.io/projected/ea0d9432-9215-4303-8914-0b0d4c7e49a8-kube-api-access-jqxnh\") pod \"router-default-5444994796-k7lmb\" (UID: \"ea0d9432-9215-4303-8914-0b0d4c7e49a8\") " pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.344567 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wqtq\" (UniqueName: \"kubernetes.io/projected/175a043a-d6f7-4c39-953b-560986f36646-kube-api-access-5wqtq\") pod \"marketplace-operator-79b997595-c27wp\" (UID: \"175a043a-d6f7-4c39-953b-560986f36646\") " pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.355054 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.365115 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r9nk\" (UniqueName: \"kubernetes.io/projected/fcc4167b-60df-4666-b1b5-dc5ea87b7f6e-kube-api-access-8r9nk\") pod \"authentication-operator-69f744f599-bb2t2\" (UID: \"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.395698 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx4jt\" (UniqueName: \"kubernetes.io/projected/702885bb-6915-436f-b925-b4c1e88e5edf-kube-api-access-vx4jt\") pod \"cluster-image-registry-operator-dc59b4c8b-2cqlg\" (UID: \"702885bb-6915-436f-b925-b4c1e88e5edf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.407507 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8qck\" (UniqueName: \"kubernetes.io/projected/787fd9f1-90b0-454b-a9cf-2016db70043d-kube-api-access-t8qck\") pod \"machine-config-controller-84d6567774-4qxbh\" (UID: \"787fd9f1-90b0-454b-a9cf-2016db70043d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.420939 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k867c\" (UniqueName: \"kubernetes.io/projected/7c641c91-1772-452d-b8e5-e2e917fe0f3e-kube-api-access-k867c\") pod \"etcd-operator-b45778765-kv4b4\" (UID: \"7c641c91-1772-452d-b8e5-e2e917fe0f3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.439828 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jkkv6\" (UniqueName: \"kubernetes.io/projected/8f3171d3-7275-477b-8c99-cae75ecd914c-kube-api-access-jkkv6\") pod \"machine-api-operator-5694c8668f-kv6zt\" (UID: \"8f3171d3-7275-477b-8c99-cae75ecd914c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.462471 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.464206 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx625\" (UniqueName: \"kubernetes.io/projected/e2e4841e-e880-45f4-8769-cd9fea35654e-kube-api-access-qx625\") pod \"controller-manager-879f6c89f-8qhsc\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.482741 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.484916 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw"] Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.488984 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5"] Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.489905 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd2sz\" (UniqueName: \"kubernetes.io/projected/5ff64219-76d2-4a04-9932-59f5c1619358-kube-api-access-dd2sz\") pod \"cluster-samples-operator-665b6dd947-s56tv\" (UID: \"5ff64219-76d2-4a04-9932-59f5c1619358\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s56tv" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.490922 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bxz2x"] Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.494203 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:25 crc kubenswrapper[4687]: W0131 06:45:25.497325 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcecb279e_a3b6_4860_9afe_62cf3eeb2e9c.slice/crio-7bb420987c83be9e6ff3a1df2e32fafe2e23982522622b7bb8736bf382887482 WatchSource:0}: Error finding container 7bb420987c83be9e6ff3a1df2e32fafe2e23982522622b7bb8736bf382887482: Status 404 returned error can't find the container with id 7bb420987c83be9e6ff3a1df2e32fafe2e23982522622b7bb8736bf382887482 Jan 31 06:45:25 crc kubenswrapper[4687]: W0131 06:45:25.500174 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2b80006_d9e1_40e5_becc_5764e747f572.slice/crio-3e514cc6f31b2e65d2d3d5945c1c66f2e50e96e125e0b8f662f06f74905962aa WatchSource:0}: Error finding container 3e514cc6f31b2e65d2d3d5945c1c66f2e50e96e125e0b8f662f06f74905962aa: Status 404 returned error can't find the container with id 3e514cc6f31b2e65d2d3d5945c1c66f2e50e96e125e0b8f662f06f74905962aa Jan 31 06:45:25 crc kubenswrapper[4687]: W0131 06:45:25.501502 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf112d0fe_1fbc_4892_b6ff_81ab1edfcb0b.slice/crio-1415a4586393eaf319079c847c1de430b2866227f5786c73e887f41b08c405c1 WatchSource:0}: Error finding container 1415a4586393eaf319079c847c1de430b2866227f5786c73e887f41b08c405c1: Status 404 returned error can't find the container with id 1415a4586393eaf319079c847c1de430b2866227f5786c73e887f41b08c405c1 Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.502938 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvpzp\" (UniqueName: \"kubernetes.io/projected/c1b4bdad-f662-48bd-b1ae-1a9916973b8b-kube-api-access-fvpzp\") pod 
\"console-f9d7485db-crdmb\" (UID: \"c1b4bdad-f662-48bd-b1ae-1a9916973b8b\") " pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:25 crc kubenswrapper[4687]: W0131 06:45:25.503005 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba4ee6bf_8298_425c_8603_0816ef6d62a2.slice/crio-fab790b1c3636419673f7f5eb77a11e7d5b141f2de017e14ab37b6c99a949e3b WatchSource:0}: Error finding container fab790b1c3636419673f7f5eb77a11e7d5b141f2de017e14ab37b6c99a949e3b: Status 404 returned error can't find the container with id fab790b1c3636419673f7f5eb77a11e7d5b141f2de017e14ab37b6c99a949e3b Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.507921 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.515123 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.520226 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.521069 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/702885bb-6915-436f-b925-b4c1e88e5edf-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2cqlg\" (UID: \"702885bb-6915-436f-b925-b4c1e88e5edf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.544581 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8mhg\" (UniqueName: \"kubernetes.io/projected/7ebcd1d8-3a13-4a8c-859b-b1d8351883ef-kube-api-access-b8mhg\") pod \"catalog-operator-68c6474976-pzw56\" (UID: \"7ebcd1d8-3a13-4a8c-859b-b1d8351883ef\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.561250 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d56dx\" (UniqueName: \"kubernetes.io/projected/18165c42-63ba-4c65-8ba7-f0e205fc74b7-kube-api-access-d56dx\") pod \"console-operator-58897d9998-jrsbk\" (UID: \"18165c42-63ba-4c65-8ba7-f0e205fc74b7\") " pod="openshift-console-operator/console-operator-58897d9998-jrsbk" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.562853 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.570536 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bgn6j" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.580196 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-p4g92"] Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.590053 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86e55636-cd31-4ec5-9e24-2c281d474481-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wfgl2\" (UID: \"86e55636-cd31-4ec5-9e24-2c281d474481\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.594617 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.600230 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb"] Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.601877 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s56tv" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.611603 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc5kn\" (UniqueName: \"kubernetes.io/projected/5695d1ed-642b-4546-9624-306b27441931-kube-api-access-tc5kn\") pod \"ingress-operator-5b745b69d9-xxvkh\" (UID: \"5695d1ed-642b-4546-9624-306b27441931\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.615879 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.617348 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kg8s2"] Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.624574 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9tsl\" (UniqueName: \"kubernetes.io/projected/e5b7bf80-e0c2-461f-944b-43b00db98f09-kube-api-access-s9tsl\") pod \"downloads-7954f5f757-vxbfn\" (UID: \"e5b7bf80-e0c2-461f-944b-43b00db98f09\") " pod="openshift-console/downloads-7954f5f757-vxbfn" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.633757 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jrsbk" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.647072 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvkt2\" (UniqueName: \"kubernetes.io/projected/ce8ee922-54be-446b-ab92-e5459763496c-kube-api-access-hvkt2\") pod \"dns-operator-744455d44c-47m2d\" (UID: \"ce8ee922-54be-446b-ab92-e5459763496c\") " pod="openshift-dns-operator/dns-operator-744455d44c-47m2d" Jan 31 06:45:25 crc kubenswrapper[4687]: W0131 06:45:25.648362 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f3a2c39_d679_4b61_affc_eeb451304860.slice/crio-28fb0d81b7157a3ece7c4c98f80ecefb628eb70580e8db03d0bf3f5d77744ba1 WatchSource:0}: Error finding container 28fb0d81b7157a3ece7c4c98f80ecefb628eb70580e8db03d0bf3f5d77744ba1: Status 404 returned error can't find the container with id 28fb0d81b7157a3ece7c4c98f80ecefb628eb70580e8db03d0bf3f5d77744ba1 Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.661753 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l"] Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.665520 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbkpz\" (UniqueName: \"kubernetes.io/projected/04efc7d0-c0f8-44ee-ac0e-5289f770f39e-kube-api-access-zbkpz\") pod \"packageserver-d55dfcdfc-q6qrp\" (UID: \"04efc7d0-c0f8-44ee-ac0e-5289f770f39e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.669322 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh" Jan 31 06:45:25 crc kubenswrapper[4687]: W0131 06:45:25.679394 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabed3680_932f_4c8b_8ff2_3b011b996088.slice/crio-d396c8050648356b57ab98679ba22d2ecd16378d1e593aff41716cc33258e086 WatchSource:0}: Error finding container d396c8050648356b57ab98679ba22d2ecd16378d1e593aff41716cc33258e086: Status 404 returned error can't find the container with id d396c8050648356b57ab98679ba22d2ecd16378d1e593aff41716cc33258e086 Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.681675 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c58d0a19-a26d-4bb4-a46a-4bffe9491a99-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hsxqb\" (UID: \"c58d0a19-a26d-4bb4-a46a-4bffe9491a99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb" Jan 31 06:45:25 crc kubenswrapper[4687]: W0131 06:45:25.684529 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfed9a01f_700b_493d_bb38_7a730dddccb3.slice/crio-d58a3fda4ce0f2345afeaa6dd3e628cf4c1c9c06cce53c995f0bc7d2c2ce96dc WatchSource:0}: Error finding container d58a3fda4ce0f2345afeaa6dd3e628cf4c1c9c06cce53c995f0bc7d2c2ce96dc: Status 404 returned error can't find the container with id d58a3fda4ce0f2345afeaa6dd3e628cf4c1c9c06cce53c995f0bc7d2c2ce96dc Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.685471 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.698065 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.700727 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7rw2\" (UniqueName: \"kubernetes.io/projected/25dca60d-d3da-4a23-b32a-cf4654f6298d-kube-api-access-b7rw2\") pod \"olm-operator-6b444d44fb-m6962\" (UID: \"25dca60d-d3da-4a23-b32a-cf4654f6298d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.708579 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.724972 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pskw5\" (UniqueName: \"kubernetes.io/projected/cf92b96c-c1bc-4102-a49b-003d08ef9de7-kube-api-access-pskw5\") pod \"openshift-apiserver-operator-796bbdcf4f-qjjpt\" (UID: \"cf92b96c-c1bc-4102-a49b-003d08ef9de7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.734088 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8qhsc"] Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.742307 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf5rs\" (UniqueName: \"kubernetes.io/projected/ea6afbaa-a516-45e0-bbd8-199b879e2654-kube-api-access-vf5rs\") pod \"oauth-openshift-558db77b4-6qn9w\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.760226 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z"] Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.764316 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9wgx\" (UniqueName: \"kubernetes.io/projected/ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef-kube-api-access-n9wgx\") pod \"control-plane-machine-set-operator-78cbb6b69f-vsgwh\" (UID: \"ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vsgwh" Jan 31 06:45:25 crc kubenswrapper[4687]: W0131 06:45:25.782670 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2e4841e_e880_45f4_8769_cd9fea35654e.slice/crio-d0c64483a4d7a502db042097e6ee4f877efc5354e3a1ec89823097f9eb096e78 WatchSource:0}: Error finding container d0c64483a4d7a502db042097e6ee4f877efc5354e3a1ec89823097f9eb096e78: Status 404 returned error can't find the container with id d0c64483a4d7a502db042097e6ee4f877efc5354e3a1ec89823097f9eb096e78 Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.783934 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt2b9\" (UniqueName: \"kubernetes.io/projected/f32026ee-35c2-42bc-aa53-14e8ccc5e136-kube-api-access-jt2b9\") pod \"openshift-config-operator-7777fb866f-zfg87\" (UID: \"f32026ee-35c2-42bc-aa53-14e8ccc5e136\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.786560 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-47m2d" Jan 31 06:45:25 crc kubenswrapper[4687]: W0131 06:45:25.790299 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65572f7a_260e_4d12_b9ad_e17f1b17eab4.slice/crio-da4ffea02b55d3dc10851789a38cd8537cb690bd84dea778e25520a890a62c07 WatchSource:0}: Error finding container da4ffea02b55d3dc10851789a38cd8537cb690bd84dea778e25520a890a62c07: Status 404 returned error can't find the container with id da4ffea02b55d3dc10851789a38cd8537cb690bd84dea778e25520a890a62c07 Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.793882 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.801324 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfxmg\" (UniqueName: \"kubernetes.io/projected/8766ce0a-e289-4861-9b0a-9b9ad7c0e623-kube-api-access-gfxmg\") pod \"openshift-controller-manager-operator-756b6f6bc6-2nqsr\" (UID: \"8766ce0a-e289-4861-9b0a-9b9ad7c0e623\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.828362 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.830289 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kv6zt"] Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.836132 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.856927 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.860967 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.870563 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-vxbfn" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.871560 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh"] Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.872864 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbpmd\" (UniqueName: \"kubernetes.io/projected/c7d4fad4-0654-4a5a-b91f-a5e2c8bd7f73-kube-api-access-kbpmd\") pod \"package-server-manager-789f6589d5-8rhlq\" (UID: \"c7d4fad4-0654-4a5a-b91f-a5e2c8bd7f73\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.872936 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.873053 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8e49c821-a661-46f0-bbce-7cc8366fee3f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: E0131 06:45:25.873233 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:26.373217775 +0000 UTC m=+152.650477430 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.873356 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/72129b1e-6186-43d3-9471-a7e9a3f91ffe-images\") pod \"machine-config-operator-74547568cd-44wfw\" (UID: \"72129b1e-6186-43d3-9471-a7e9a3f91ffe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.876927 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zcf7\" (UniqueName: \"kubernetes.io/projected/72129b1e-6186-43d3-9471-a7e9a3f91ffe-kube-api-access-9zcf7\") pod \"machine-config-operator-74547568cd-44wfw\" (UID: \"72129b1e-6186-43d3-9471-a7e9a3f91ffe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.877009 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e49c821-a661-46f0-bbce-7cc8366fee3f-trusted-ca\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.877030 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8e49c821-a661-46f0-bbce-7cc8366fee3f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.877064 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8e49c821-a661-46f0-bbce-7cc8366fee3f-registry-certificates\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.877108 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-bound-sa-token\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.877138 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4w66\" (UniqueName: \"kubernetes.io/projected/0e99374e-992a-48aa-b353-0e298dfb0889-kube-api-access-b4w66\") pod \"migrator-59844c95c7-bq5j8\" (UID: \"0e99374e-992a-48aa-b353-0e298dfb0889\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bq5j8" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.877174 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/72129b1e-6186-43d3-9471-a7e9a3f91ffe-auth-proxy-config\") pod \"machine-config-operator-74547568cd-44wfw\" (UID: \"72129b1e-6186-43d3-9471-a7e9a3f91ffe\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.877208 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/72129b1e-6186-43d3-9471-a7e9a3f91ffe-proxy-tls\") pod \"machine-config-operator-74547568cd-44wfw\" (UID: \"72129b1e-6186-43d3-9471-a7e9a3f91ffe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.877231 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-registry-tls\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.877257 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d4fad4-0654-4a5a-b91f-a5e2c8bd7f73-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8rhlq\" (UID: \"c7d4fad4-0654-4a5a-b91f-a5e2c8bd7f73\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.877280 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crws\" (UniqueName: \"kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-kube-api-access-9crws\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.884541 4687 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.886513 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vsgwh" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.919992 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c27wp"] Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.923969 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.977769 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:25 crc kubenswrapper[4687]: E0131 06:45:25.977887 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:26.477861031 +0000 UTC m=+152.755120606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.978004 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d4fad4-0654-4a5a-b91f-a5e2c8bd7f73-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8rhlq\" (UID: \"c7d4fad4-0654-4a5a-b91f-a5e2c8bd7f73\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.978052 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9crws\" (UniqueName: \"kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-kube-api-access-9crws\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.978112 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4-metrics-tls\") pod \"dns-default-jp9kx\" (UID: \"91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4\") " pod="openshift-dns/dns-default-jp9kx" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.978180 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-plugins-dir\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.978246 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbpmd\" (UniqueName: \"kubernetes.io/projected/c7d4fad4-0654-4a5a-b91f-a5e2c8bd7f73-kube-api-access-kbpmd\") pod \"package-server-manager-789f6589d5-8rhlq\" (UID: \"c7d4fad4-0654-4a5a-b91f-a5e2c8bd7f73\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.978290 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-csi-data-dir\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.978345 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.978435 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8e49c821-a661-46f0-bbce-7cc8366fee3f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 
06:45:25.978625 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4-config-volume\") pod \"dns-default-jp9kx\" (UID: \"91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4\") " pod="openshift-dns/dns-default-jp9kx" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.978790 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-registration-dir\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.978852 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/97f155cf-b9dc-420d-8540-2b03fab31a5e-certs\") pod \"machine-config-server-wz29h\" (UID: \"97f155cf-b9dc-420d-8540-2b03fab31a5e\") " pod="openshift-machine-config-operator/machine-config-server-wz29h" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.978909 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/72129b1e-6186-43d3-9471-a7e9a3f91ffe-images\") pod \"machine-config-operator-74547568cd-44wfw\" (UID: \"72129b1e-6186-43d3-9471-a7e9a3f91ffe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.978946 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zcf7\" (UniqueName: \"kubernetes.io/projected/72129b1e-6186-43d3-9471-a7e9a3f91ffe-kube-api-access-9zcf7\") pod \"machine-config-operator-74547568cd-44wfw\" (UID: \"72129b1e-6186-43d3-9471-a7e9a3f91ffe\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.978971 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-socket-dir\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.979045 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtnkg\" (UniqueName: \"kubernetes.io/projected/91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4-kube-api-access-xtnkg\") pod \"dns-default-jp9kx\" (UID: \"91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4\") " pod="openshift-dns/dns-default-jp9kx" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.979118 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40e06c8e-427c-4de8-b3c9-7a10e83ea115-cert\") pod \"ingress-canary-52dkb\" (UID: \"40e06c8e-427c-4de8-b3c9-7a10e83ea115\") " pod="openshift-ingress-canary/ingress-canary-52dkb" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.979141 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-mountpoint-dir\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.979175 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e49c821-a661-46f0-bbce-7cc8366fee3f-trusted-ca\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: 
\"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.979226 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8e49c821-a661-46f0-bbce-7cc8366fee3f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.979303 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8e49c821-a661-46f0-bbce-7cc8366fee3f-registry-certificates\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.979393 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbgg8\" (UniqueName: \"kubernetes.io/projected/97f155cf-b9dc-420d-8540-2b03fab31a5e-kube-api-access-bbgg8\") pod \"machine-config-server-wz29h\" (UID: \"97f155cf-b9dc-420d-8540-2b03fab31a5e\") " pod="openshift-machine-config-operator/machine-config-server-wz29h" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.979440 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-bound-sa-token\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.979461 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kmxnv\" (UniqueName: \"kubernetes.io/projected/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-kube-api-access-kmxnv\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.979492 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/97f155cf-b9dc-420d-8540-2b03fab31a5e-node-bootstrap-token\") pod \"machine-config-server-wz29h\" (UID: \"97f155cf-b9dc-420d-8540-2b03fab31a5e\") " pod="openshift-machine-config-operator/machine-config-server-wz29h" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.979523 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4w66\" (UniqueName: \"kubernetes.io/projected/0e99374e-992a-48aa-b353-0e298dfb0889-kube-api-access-b4w66\") pod \"migrator-59844c95c7-bq5j8\" (UID: \"0e99374e-992a-48aa-b353-0e298dfb0889\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bq5j8" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.979590 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/72129b1e-6186-43d3-9471-a7e9a3f91ffe-auth-proxy-config\") pod \"machine-config-operator-74547568cd-44wfw\" (UID: \"72129b1e-6186-43d3-9471-a7e9a3f91ffe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.979610 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmp8c\" (UniqueName: \"kubernetes.io/projected/40e06c8e-427c-4de8-b3c9-7a10e83ea115-kube-api-access-zmp8c\") pod \"ingress-canary-52dkb\" (UID: \"40e06c8e-427c-4de8-b3c9-7a10e83ea115\") " pod="openshift-ingress-canary/ingress-canary-52dkb" Jan 31 
06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.979739 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/72129b1e-6186-43d3-9471-a7e9a3f91ffe-proxy-tls\") pod \"machine-config-operator-74547568cd-44wfw\" (UID: \"72129b1e-6186-43d3-9471-a7e9a3f91ffe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.979763 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-registry-tls\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.980355 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e49c821-a661-46f0-bbce-7cc8366fee3f-trusted-ca\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.981196 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/72129b1e-6186-43d3-9471-a7e9a3f91ffe-images\") pod \"machine-config-operator-74547568cd-44wfw\" (UID: \"72129b1e-6186-43d3-9471-a7e9a3f91ffe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.982877 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d4fad4-0654-4a5a-b91f-a5e2c8bd7f73-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8rhlq\" (UID: 
\"c7d4fad4-0654-4a5a-b91f-a5e2c8bd7f73\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq" Jan 31 06:45:25 crc kubenswrapper[4687]: E0131 06:45:25.983652 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:26.483633516 +0000 UTC m=+152.760893181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:25 crc kubenswrapper[4687]: I0131 06:45:25.988967 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/72129b1e-6186-43d3-9471-a7e9a3f91ffe-proxy-tls\") pod \"machine-config-operator-74547568cd-44wfw\" (UID: \"72129b1e-6186-43d3-9471-a7e9a3f91ffe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.007227 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8e49c821-a661-46f0-bbce-7cc8366fee3f-registry-certificates\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.007761 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/72129b1e-6186-43d3-9471-a7e9a3f91ffe-auth-proxy-config\") pod \"machine-config-operator-74547568cd-44wfw\" (UID: \"72129b1e-6186-43d3-9471-a7e9a3f91ffe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.007867 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8e49c821-a661-46f0-bbce-7cc8366fee3f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.010148 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-registry-tls\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.010947 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8e49c821-a661-46f0-bbce-7cc8366fee3f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.011767 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.023488 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9crws\" (UniqueName: \"kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-kube-api-access-9crws\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.052349 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbpmd\" (UniqueName: \"kubernetes.io/projected/c7d4fad4-0654-4a5a-b91f-a5e2c8bd7f73-kube-api-access-kbpmd\") pod \"package-server-manager-789f6589d5-8rhlq\" (UID: \"c7d4fad4-0654-4a5a-b91f-a5e2c8bd7f73\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq" Jan 31 06:45:26 crc kubenswrapper[4687]: W0131 06:45:26.055944 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod175a043a_d6f7_4c39_953b_560986f36646.slice/crio-426e81ebf509272df08f08fd3e88299429a51833f2177ddfbed9160cee4eca3e WatchSource:0}: Error finding container 426e81ebf509272df08f08fd3e88299429a51833f2177ddfbed9160cee4eca3e: Status 404 returned error can't find the container with id 426e81ebf509272df08f08fd3e88299429a51833f2177ddfbed9160cee4eca3e Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.066772 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zcf7\" (UniqueName: \"kubernetes.io/projected/72129b1e-6186-43d3-9471-a7e9a3f91ffe-kube-api-access-9zcf7\") pod \"machine-config-operator-74547568cd-44wfw\" (UID: \"72129b1e-6186-43d3-9471-a7e9a3f91ffe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" Jan 31 06:45:26 crc 
kubenswrapper[4687]: I0131 06:45:26.078905 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jrsbk"] Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.080027 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bb2t2"] Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.080470 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.080578 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:26.580554671 +0000 UTC m=+152.857814246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.080667 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmp8c\" (UniqueName: \"kubernetes.io/projected/40e06c8e-427c-4de8-b3c9-7a10e83ea115-kube-api-access-zmp8c\") pod \"ingress-canary-52dkb\" (UID: \"40e06c8e-427c-4de8-b3c9-7a10e83ea115\") " pod="openshift-ingress-canary/ingress-canary-52dkb" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.080704 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4-metrics-tls\") pod \"dns-default-jp9kx\" (UID: \"91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4\") " pod="openshift-dns/dns-default-jp9kx" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.080734 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-plugins-dir\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.080753 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-csi-data-dir\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 
06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.080776 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.080812 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4-config-volume\") pod \"dns-default-jp9kx\" (UID: \"91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4\") " pod="openshift-dns/dns-default-jp9kx" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.080840 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-registration-dir\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.080856 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/97f155cf-b9dc-420d-8540-2b03fab31a5e-certs\") pod \"machine-config-server-wz29h\" (UID: \"97f155cf-b9dc-420d-8540-2b03fab31a5e\") " pod="openshift-machine-config-operator/machine-config-server-wz29h" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.080880 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-socket-dir\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:26 
crc kubenswrapper[4687]: I0131 06:45:26.080896 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtnkg\" (UniqueName: \"kubernetes.io/projected/91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4-kube-api-access-xtnkg\") pod \"dns-default-jp9kx\" (UID: \"91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4\") " pod="openshift-dns/dns-default-jp9kx" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.080950 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40e06c8e-427c-4de8-b3c9-7a10e83ea115-cert\") pod \"ingress-canary-52dkb\" (UID: \"40e06c8e-427c-4de8-b3c9-7a10e83ea115\") " pod="openshift-ingress-canary/ingress-canary-52dkb" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.080989 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-mountpoint-dir\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.081020 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbgg8\" (UniqueName: \"kubernetes.io/projected/97f155cf-b9dc-420d-8540-2b03fab31a5e-kube-api-access-bbgg8\") pod \"machine-config-server-wz29h\" (UID: \"97f155cf-b9dc-420d-8540-2b03fab31a5e\") " pod="openshift-machine-config-operator/machine-config-server-wz29h" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.081019 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-plugins-dir\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.081007 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-csi-data-dir\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.081074 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmxnv\" (UniqueName: \"kubernetes.io/projected/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-kube-api-access-kmxnv\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.081092 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/97f155cf-b9dc-420d-8540-2b03fab31a5e-node-bootstrap-token\") pod \"machine-config-server-wz29h\" (UID: \"97f155cf-b9dc-420d-8540-2b03fab31a5e\") " pod="openshift-machine-config-operator/machine-config-server-wz29h" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.081471 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-registration-dir\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.081605 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:26.581594671 +0000 UTC m=+152.858854246 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.081705 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-mountpoint-dir\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.081917 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-socket-dir\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.082068 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-bound-sa-token\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.082134 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4-config-volume\") pod \"dns-default-jp9kx\" (UID: \"91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4\") " pod="openshift-dns/dns-default-jp9kx" Jan 31 06:45:26 crc 
kubenswrapper[4687]: I0131 06:45:26.085469 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40e06c8e-427c-4de8-b3c9-7a10e83ea115-cert\") pod \"ingress-canary-52dkb\" (UID: \"40e06c8e-427c-4de8-b3c9-7a10e83ea115\") " pod="openshift-ingress-canary/ingress-canary-52dkb" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.099458 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bgn6j"] Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.100548 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4w66\" (UniqueName: \"kubernetes.io/projected/0e99374e-992a-48aa-b353-0e298dfb0889-kube-api-access-b4w66\") pod \"migrator-59844c95c7-bq5j8\" (UID: \"0e99374e-992a-48aa-b353-0e298dfb0889\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bq5j8" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.107450 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bq5j8" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.140730 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmp8c\" (UniqueName: \"kubernetes.io/projected/40e06c8e-427c-4de8-b3c9-7a10e83ea115-kube-api-access-zmp8c\") pod \"ingress-canary-52dkb\" (UID: \"40e06c8e-427c-4de8-b3c9-7a10e83ea115\") " pod="openshift-ingress-canary/ingress-canary-52dkb" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.149217 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.161859 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmxnv\" (UniqueName: \"kubernetes.io/projected/7c8d3ed7-cfa7-413a-bcd8-585109bab7e7-kube-api-access-kmxnv\") pod \"csi-hostpathplugin-6k67p\" (UID: \"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7\") " pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.178391 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.181665 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.181809 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:26.68178654 +0000 UTC m=+152.959046115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.181899 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.182232 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:26.682224683 +0000 UTC m=+152.959484258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.189703 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2"] Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.195137 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4-metrics-tls\") pod \"dns-default-jp9kx\" (UID: \"91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4\") " pod="openshift-dns/dns-default-jp9kx" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.195930 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtnkg\" (UniqueName: \"kubernetes.io/projected/91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4-kube-api-access-xtnkg\") pod \"dns-default-jp9kx\" (UID: \"91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4\") " pod="openshift-dns/dns-default-jp9kx" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.206885 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-52dkb" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.210408 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/97f155cf-b9dc-420d-8540-2b03fab31a5e-certs\") pod \"machine-config-server-wz29h\" (UID: \"97f155cf-b9dc-420d-8540-2b03fab31a5e\") " pod="openshift-machine-config-operator/machine-config-server-wz29h" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.210697 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/97f155cf-b9dc-420d-8540-2b03fab31a5e-node-bootstrap-token\") pod \"machine-config-server-wz29h\" (UID: \"97f155cf-b9dc-420d-8540-2b03fab31a5e\") " pod="openshift-machine-config-operator/machine-config-server-wz29h" Jan 31 06:45:26 crc kubenswrapper[4687]: W0131 06:45:26.218568 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcc4167b_60df_4666_b1b5_dc5ea87b7f6e.slice/crio-c9f978f9b4a1f6fdb4c415f59251d189f585560d0367588296000f11bd5d4911 WatchSource:0}: Error finding container c9f978f9b4a1f6fdb4c415f59251d189f585560d0367588296000f11bd5d4911: Status 404 returned error can't find the container with id c9f978f9b4a1f6fdb4c415f59251d189f585560d0367588296000f11bd5d4911 Jan 31 06:45:26 crc kubenswrapper[4687]: W0131 06:45:26.222188 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18165c42_63ba_4c65_8ba7_f0e205fc74b7.slice/crio-61af504a78146548f475ee5bde7565ad5886f2714fa0311d46a4b0dd1bf0dbff WatchSource:0}: Error finding container 61af504a78146548f475ee5bde7565ad5886f2714fa0311d46a4b0dd1bf0dbff: Status 404 returned error can't find the container with id 61af504a78146548f475ee5bde7565ad5886f2714fa0311d46a4b0dd1bf0dbff Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 
06:45:26.227728 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6k67p" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.234388 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jp9kx" Jan 31 06:45:26 crc kubenswrapper[4687]: W0131 06:45:26.240541 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86e55636_cd31_4ec5_9e24_2c281d474481.slice/crio-84cb9b92837d223e0888a34fa2439ca01006cdf86684a7eb0bd29f3293bb2dcf WatchSource:0}: Error finding container 84cb9b92837d223e0888a34fa2439ca01006cdf86684a7eb0bd29f3293bb2dcf: Status 404 returned error can't find the container with id 84cb9b92837d223e0888a34fa2439ca01006cdf86684a7eb0bd29f3293bb2dcf Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.283312 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.283426 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:26.783388899 +0000 UTC m=+153.060648474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.283686 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.284006 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:26.783998847 +0000 UTC m=+153.061258422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.317216 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2" event={"ID":"86e55636-cd31-4ec5-9e24-2c281d474481","Type":"ContainerStarted","Data":"84cb9b92837d223e0888a34fa2439ca01006cdf86684a7eb0bd29f3293bb2dcf"} Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.318756 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kg8s2" event={"ID":"abed3680-932f-4c8b-8ff2-3b011b996088","Type":"ContainerStarted","Data":"d396c8050648356b57ab98679ba22d2ecd16378d1e593aff41716cc33258e086"} Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.319631 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5" event={"ID":"cecb279e-a3b6-4860-9afe-62cf3eeb2e9c","Type":"ContainerStarted","Data":"7bb420987c83be9e6ff3a1df2e32fafe2e23982522622b7bb8736bf382887482"} Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.321145 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4g92" event={"ID":"4f3a2c39-d679-4b61-affc-eeb451304860","Type":"ContainerStarted","Data":"28fb0d81b7157a3ece7c4c98f80ecefb628eb70580e8db03d0bf3f5d77744ba1"} Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.322527 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt" event={"ID":"8f3171d3-7275-477b-8c99-cae75ecd914c","Type":"ContainerStarted","Data":"15c9980533649d3782253da26bca978d61695cfb219fe0a57efe26a1947666bb"} Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.323631 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bgn6j" event={"ID":"dadcab3b-fc57-4a2f-b680-09fc1d6b1dff","Type":"ContainerStarted","Data":"78657feb3f270b1ede893034c8c49fbaf3476ccb93a24ee2ad2ce6a11b094665"} Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.325007 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" event={"ID":"65572f7a-260e-4d12-b9ad-e17f1b17eab4","Type":"ContainerStarted","Data":"da4ffea02b55d3dc10851789a38cd8537cb690bd84dea778e25520a890a62c07"} Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.336522 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb" event={"ID":"b987ab3e-a46b-4852-881e-cd84a2f42e26","Type":"ContainerStarted","Data":"5028b2a1460b192004aea8f8b010a3ac295e5800f0c5be44fe515d1b52b247e9"} Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.339258 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" event={"ID":"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e","Type":"ContainerStarted","Data":"c9f978f9b4a1f6fdb4c415f59251d189f585560d0367588296000f11bd5d4911"} Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.340298 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh" event={"ID":"787fd9f1-90b0-454b-a9cf-2016db70043d","Type":"ContainerStarted","Data":"12c66940717d828a973c0c9ba0198cc10976640a0424e74d959757fe9e26b02c"} Jan 31 06:45:26 crc 
kubenswrapper[4687]: I0131 06:45:26.341194 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" event={"ID":"e2e4841e-e880-45f4-8769-cd9fea35654e","Type":"ContainerStarted","Data":"d0c64483a4d7a502db042097e6ee4f877efc5354e3a1ec89823097f9eb096e78"} Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.342451 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jrsbk" event={"ID":"18165c42-63ba-4c65-8ba7-f0e205fc74b7","Type":"ContainerStarted","Data":"61af504a78146548f475ee5bde7565ad5886f2714fa0311d46a4b0dd1bf0dbff"} Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.349388 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-k7lmb" event={"ID":"ea0d9432-9215-4303-8914-0b0d4c7e49a8","Type":"ContainerStarted","Data":"76eab58a79651e9c865300a558380b76f84c73ea55f8a61bae1e9a8da7de3dbc"} Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.353028 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss" event={"ID":"f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b","Type":"ContainerStarted","Data":"1415a4586393eaf319079c847c1de430b2866227f5786c73e887f41b08c405c1"} Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.360712 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" event={"ID":"ba4ee6bf-8298-425c-8603-0816ef6d62a2","Type":"ContainerStarted","Data":"fab790b1c3636419673f7f5eb77a11e7d5b141f2de017e14ab37b6c99a949e3b"} Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.362810 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" event={"ID":"d2b80006-d9e1-40e5-becc-5764e747f572","Type":"ContainerStarted","Data":"3e514cc6f31b2e65d2d3d5945c1c66f2e50e96e125e0b8f662f06f74905962aa"} Jan 31 06:45:26 crc 
kubenswrapper[4687]: I0131 06:45:26.372498 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" event={"ID":"fed9a01f-700b-493d-bb38-7a730dddccb3","Type":"ContainerStarted","Data":"d58a3fda4ce0f2345afeaa6dd3e628cf4c1c9c06cce53c995f0bc7d2c2ce96dc"} Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.374091 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" event={"ID":"175a043a-d6f7-4c39-953b-560986f36646","Type":"ContainerStarted","Data":"426e81ebf509272df08f08fd3e88299429a51833f2177ddfbed9160cee4eca3e"} Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.384747 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:26.884726711 +0000 UTC m=+153.161986286 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.386675 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.386955 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.387372 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:26.887361636 +0000 UTC m=+153.164621221 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.400226 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbgg8\" (UniqueName: \"kubernetes.io/projected/97f155cf-b9dc-420d-8540-2b03fab31a5e-kube-api-access-bbgg8\") pod \"machine-config-server-wz29h\" (UID: \"97f155cf-b9dc-420d-8540-2b03fab31a5e\") " pod="openshift-machine-config-operator/machine-config-server-wz29h" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.488240 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.488436 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:26.988391349 +0000 UTC m=+153.265650924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.488657 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.489011 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:26.989004036 +0000 UTC m=+153.266263611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.542795 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wz29h" Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.589760 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.589900 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:27.089872334 +0000 UTC m=+153.367131909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.590003 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.590337 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:27.090329757 +0000 UTC m=+153.367589332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.690936 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.691159 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:27.191128154 +0000 UTC m=+153.468387749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.691271 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.691834 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:27.191824553 +0000 UTC m=+153.469084138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.784399 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh"] Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.792341 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.792637 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:27.292591359 +0000 UTC m=+153.569850934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.792693 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.793346 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:27.29333655 +0000 UTC m=+153.570596125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.828933 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-crdmb"] Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.833065 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vsgwh"] Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.836606 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kv4b4"] Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.885049 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6qn9w"] Jan 31 06:45:26 crc kubenswrapper[4687]: W0131 06:45:26.886172 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5695d1ed_642b_4546_9624_306b27441931.slice/crio-1b0e7dc20b8a65b31af2404b69a208d0df9ceb2faa56465a8825ae44da43b279 WatchSource:0}: Error finding container 1b0e7dc20b8a65b31af2404b69a208d0df9ceb2faa56465a8825ae44da43b279: Status 404 returned error can't find the container with id 1b0e7dc20b8a65b31af2404b69a208d0df9ceb2faa56465a8825ae44da43b279 Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.891949 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s56tv"] Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.894928 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.895391 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:27.395373442 +0000 UTC m=+153.672633017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.895451 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zfg87"] Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.897370 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-47m2d"] Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.903800 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg"] Jan 31 06:45:26 crc kubenswrapper[4687]: W0131 06:45:26.931349 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea6afbaa_a516_45e0_bbd8_199b879e2654.slice/crio-725662c67fc711ecf1cd3bb9936ff031d3b56978cb0b0953d9a1fb82799cfee1 WatchSource:0}: Error finding container 725662c67fc711ecf1cd3bb9936ff031d3b56978cb0b0953d9a1fb82799cfee1: Status 404 returned error can't find the container with id 725662c67fc711ecf1cd3bb9936ff031d3b56978cb0b0953d9a1fb82799cfee1 Jan 31 06:45:26 crc kubenswrapper[4687]: W0131 06:45:26.960324 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1b4bdad_f662_48bd_b1ae_1a9916973b8b.slice/crio-c85f9225b6cbf7a2609b3931a631c3d0ec4e7f1f8116bf43921dcc41e8cf75ab WatchSource:0}: Error finding container c85f9225b6cbf7a2609b3931a631c3d0ec4e7f1f8116bf43921dcc41e8cf75ab: Status 404 returned error can't find the container with id c85f9225b6cbf7a2609b3931a631c3d0ec4e7f1f8116bf43921dcc41e8cf75ab Jan 31 06:45:26 crc kubenswrapper[4687]: I0131 06:45:26.996800 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:26 crc kubenswrapper[4687]: E0131 06:45:26.997133 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:27.497118555 +0000 UTC m=+153.774378130 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.097731 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:27 crc kubenswrapper[4687]: E0131 06:45:27.098034 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:27.598004154 +0000 UTC m=+153.875263729 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.098227 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:27 crc kubenswrapper[4687]: E0131 06:45:27.098537 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:27.598525798 +0000 UTC m=+153.875785363 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.123257 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt"] Jan 31 06:45:27 crc kubenswrapper[4687]: E0131 06:45:27.199315 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:27.699294973 +0000 UTC m=+153.976554538 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.199742 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.199951 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:27 crc kubenswrapper[4687]: E0131 06:45:27.200363 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:27.700342463 +0000 UTC m=+153.977602108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.300987 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:27 crc kubenswrapper[4687]: E0131 06:45:27.301160 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:27.801131909 +0000 UTC m=+154.078391484 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.301611 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:27 crc kubenswrapper[4687]: E0131 06:45:27.303260 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:27.803242459 +0000 UTC m=+154.080502104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.404717 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:27 crc kubenswrapper[4687]: E0131 06:45:27.405638 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:27.90562192 +0000 UTC m=+154.182881495 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.430485 4687 generic.go:334] "Generic (PLEG): container finished" podID="ba4ee6bf-8298-425c-8603-0816ef6d62a2" containerID="d8041b760b928b7b156ddce316bf170517e620da4bb360a30bba2f7df2ed88a4" exitCode=0 Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.430633 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" event={"ID":"ba4ee6bf-8298-425c-8603-0816ef6d62a2","Type":"ContainerDied","Data":"d8041b760b928b7b156ddce316bf170517e620da4bb360a30bba2f7df2ed88a4"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.440380 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5" event={"ID":"cecb279e-a3b6-4860-9afe-62cf3eeb2e9c","Type":"ContainerStarted","Data":"74f537289db51f350ab5043c2d30556757fca0ffe85fe12c4ce74b5fd6e305e3"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.448200 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt" event={"ID":"cf92b96c-c1bc-4102-a49b-003d08ef9de7","Type":"ContainerStarted","Data":"7131fb2467287eca0c23c6eca0312d288f63f06a5a56a3d866c941268fabe1e4"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.455954 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" 
event={"ID":"e2e4841e-e880-45f4-8769-cd9fea35654e","Type":"ContainerStarted","Data":"ba1d85580ab924a257e43454bb75eb445d26ea79fdd905a2daf33edcba72c19e"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.457153 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.471900 4687 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-8qhsc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.472272 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" podUID="e2e4841e-e880-45f4-8769-cd9fea35654e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.474240 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" event={"ID":"65572f7a-260e-4d12-b9ad-e17f1b17eab4","Type":"ContainerStarted","Data":"ac3ae5422bf890f9d59028d983f7728ae5eadb459b5c6c4efa88116d4de8795b"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.480584 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.490525 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp"] Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.490630 4687 patch_prober.go:28] interesting 
pod/route-controller-manager-6576b87f9c-fc67z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.490661 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" podUID="65572f7a-260e-4d12-b9ad-e17f1b17eab4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.504818 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb"] Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.507132 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:27 crc kubenswrapper[4687]: E0131 06:45:27.510231 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:28.010216685 +0000 UTC m=+154.287476260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.529128 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-47m2d" event={"ID":"ce8ee922-54be-446b-ab92-e5459763496c","Type":"ContainerStarted","Data":"16ef1221e651957964c0795f0847f970e962b144ece0e91b9c2f155fae23c8a8"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.531205 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vsgwh" event={"ID":"ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef","Type":"ContainerStarted","Data":"ced6e29415bd10fa28ea89a2573636c950731efd4f670c5038ac1f02a0d65dce"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.532499 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" event={"ID":"ea6afbaa-a516-45e0-bbd8-199b879e2654","Type":"ContainerStarted","Data":"725662c67fc711ecf1cd3bb9936ff031d3b56978cb0b0953d9a1fb82799cfee1"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.547702 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg" event={"ID":"702885bb-6915-436f-b925-b4c1e88e5edf","Type":"ContainerStarted","Data":"86b6059309510dfb80545a5a9e74c8d304adf026e2dbdf6d015fb25abeb3e390"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.556165 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-wz29h" event={"ID":"97f155cf-b9dc-420d-8540-2b03fab31a5e","Type":"ContainerStarted","Data":"d3c281099661b3af96eb01e21350867aa4b3d71d598506c9bf63143f359f3256"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.562379 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4g92" event={"ID":"4f3a2c39-d679-4b61-affc-eeb451304860","Type":"ContainerStarted","Data":"bb7bd4e2e7916a13ae5c8a1a59544bf65d597f94efb9a5783a27f4f6ce2d2c21"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.565674 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh" event={"ID":"5695d1ed-642b-4546-9624-306b27441931","Type":"ContainerStarted","Data":"1b0e7dc20b8a65b31af2404b69a208d0df9ceb2faa56465a8825ae44da43b279"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.568425 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb" event={"ID":"b987ab3e-a46b-4852-881e-cd84a2f42e26","Type":"ContainerStarted","Data":"e645baeaf27f6d6c17b7ab52b2cd4547395b9bbde94c467e0fa7d9c2024e085c"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.575807 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss" event={"ID":"f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b","Type":"ContainerStarted","Data":"b9324ce08f3beaf32e28277699e6f5a01c19b93a997f8e5f63264ecaf63dafa8"} Jan 31 06:45:27 crc kubenswrapper[4687]: W0131 06:45:27.576823 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc58d0a19_a26d_4bb4_a46a_4bffe9491a99.slice/crio-d63959d60332691d892f353b4e66b77a3edc93c3f6f2a6b1496fcc50127474f4 WatchSource:0}: Error finding container 
d63959d60332691d892f353b4e66b77a3edc93c3f6f2a6b1496fcc50127474f4: Status 404 returned error can't find the container with id d63959d60332691d892f353b4e66b77a3edc93c3f6f2a6b1496fcc50127474f4 Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.584745 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" event={"ID":"7c641c91-1772-452d-b8e5-e2e917fe0f3e","Type":"ContainerStarted","Data":"74a68f5ec4c1f92285a7c937a4d519ce10fad4d8260f70cb2f9ee2e9d7fca900"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.587194 4687 generic.go:334] "Generic (PLEG): container finished" podID="d2b80006-d9e1-40e5-becc-5764e747f572" containerID="bce653e7c0b32231e022bc247e914a8b34aaf95897364c7055ef2f72977c070c" exitCode=0 Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.587340 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" event={"ID":"d2b80006-d9e1-40e5-becc-5764e747f572","Type":"ContainerDied","Data":"bce653e7c0b32231e022bc247e914a8b34aaf95897364c7055ef2f72977c070c"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.610327 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:27 crc kubenswrapper[4687]: E0131 06:45:27.610538 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:28.110516807 +0000 UTC m=+154.387776382 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.611047 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:27 crc kubenswrapper[4687]: E0131 06:45:27.612908 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:28.112896954 +0000 UTC m=+154.390156529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.651461 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-crdmb" event={"ID":"c1b4bdad-f662-48bd-b1ae-1a9916973b8b","Type":"ContainerStarted","Data":"c85f9225b6cbf7a2609b3931a631c3d0ec4e7f1f8116bf43921dcc41e8cf75ab"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.678181 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-k7lmb" event={"ID":"ea0d9432-9215-4303-8914-0b0d4c7e49a8","Type":"ContainerStarted","Data":"b7c027521c3980bf1178d39541ccd00b5e0d6d1c4bf8f68f744ffe042d2c476b"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.685381 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" event={"ID":"f32026ee-35c2-42bc-aa53-14e8ccc5e136","Type":"ContainerStarted","Data":"24d59262a9cabd4b5d2d373d5f0465d7eaa2ae48f659864071587f667f9c9642"} Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.723673 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:27 crc kubenswrapper[4687]: E0131 06:45:27.724397 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:28.224381266 +0000 UTC m=+154.501640841 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.749930 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56"] Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.792118 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zkmnb" podStartSLOduration=119.792099498 podStartE2EDuration="1m59.792099498s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:27.75431263 +0000 UTC m=+154.031572225" watchObservedRunningTime="2026-01-31 06:45:27.792099498 +0000 UTC m=+154.069359083" Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.799577 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" podStartSLOduration=119.799556751 podStartE2EDuration="1m59.799556751s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:27.796002729 +0000 UTC m=+154.073262324" watchObservedRunningTime="2026-01-31 06:45:27.799556751 +0000 UTC m=+154.076816336" Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.801046 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6k67p"] Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.827033 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:27 crc kubenswrapper[4687]: E0131 06:45:27.828451 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:28.328434295 +0000 UTC m=+154.605693950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.901652 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw"] Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.926447 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr"] Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.929076 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:27 crc kubenswrapper[4687]: E0131 06:45:27.929268 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:28.42922024 +0000 UTC m=+154.706479815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.929543 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:27 crc kubenswrapper[4687]: E0131 06:45:27.929903 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:28.429884999 +0000 UTC m=+154.707144634 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:27 crc kubenswrapper[4687]: I0131 06:45:27.969254 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" podStartSLOduration=120.969238082 podStartE2EDuration="2m0.969238082s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:27.966326569 +0000 UTC m=+154.243586144" watchObservedRunningTime="2026-01-31 06:45:27.969238082 +0000 UTC m=+154.246497657" Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.013989 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p4g92" podStartSLOduration=120.013967529 podStartE2EDuration="2m0.013967529s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:28.008844422 +0000 UTC m=+154.286103997" watchObservedRunningTime="2026-01-31 06:45:28.013967529 +0000 UTC m=+154.291227104" Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.016112 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq"] Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.033133 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:28 crc kubenswrapper[4687]: E0131 06:45:28.033322 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:28.533258879 +0000 UTC m=+154.810518454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.033397 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:28 crc kubenswrapper[4687]: E0131 06:45:28.033932 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:28.533921788 +0000 UTC m=+154.811181423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.068261 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-52dkb"] Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.079598 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-vxbfn"] Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.095834 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jp9kx"] Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.108851 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" podStartSLOduration=28.108835106 podStartE2EDuration="28.108835106s" podCreationTimestamp="2026-01-31 06:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:28.098988515 +0000 UTC m=+154.376248100" watchObservedRunningTime="2026-01-31 06:45:28.108835106 +0000 UTC m=+154.386094681" Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.110087 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962"] Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.122195 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bq5j8"] Jan 31 06:45:28 crc kubenswrapper[4687]: W0131 06:45:28.122947 
4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72129b1e_6186_43d3_9471_a7e9a3f91ffe.slice/crio-c2d4102689b27342c0cdc93b2c9b28b652d78214a3463680c9ba10349f6b2dc0 WatchSource:0}: Error finding container c2d4102689b27342c0cdc93b2c9b28b652d78214a3463680c9ba10349f6b2dc0: Status 404 returned error can't find the container with id c2d4102689b27342c0cdc93b2c9b28b652d78214a3463680c9ba10349f6b2dc0 Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.136104 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:28 crc kubenswrapper[4687]: E0131 06:45:28.136514 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:28.636482735 +0000 UTC m=+154.913742310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.136746 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:28 crc kubenswrapper[4687]: E0131 06:45:28.137560 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:28.637548935 +0000 UTC m=+154.914808550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:28 crc kubenswrapper[4687]: W0131 06:45:28.222558 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40e06c8e_427c_4de8_b3c9_7a10e83ea115.slice/crio-74bb8b784fcd875b926dded2b5aa753d797719e9578ee446c5000ff1aaa0bbc9 WatchSource:0}: Error finding container 74bb8b784fcd875b926dded2b5aa753d797719e9578ee446c5000ff1aaa0bbc9: Status 404 returned error can't find the container with id 74bb8b784fcd875b926dded2b5aa753d797719e9578ee446c5000ff1aaa0bbc9 Jan 31 06:45:28 crc kubenswrapper[4687]: W0131 06:45:28.231574 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25dca60d_d3da_4a23_b32a_cf4654f6298d.slice/crio-f5adfa2edaefe051d5193c1f995ba8c8957bdaba718b7a84d1d8621086e3c67a WatchSource:0}: Error finding container f5adfa2edaefe051d5193c1f995ba8c8957bdaba718b7a84d1d8621086e3c67a: Status 404 returned error can't find the container with id f5adfa2edaefe051d5193c1f995ba8c8957bdaba718b7a84d1d8621086e3c67a Jan 31 06:45:28 crc kubenswrapper[4687]: W0131 06:45:28.231952 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91eb9b4e_6f7a_4e92_8ad8_4bd3668e69d4.slice/crio-ee75fafcd7c04e57532b5c8c40d4fe25a488af384c858deb193c40cf51fa9ac7 WatchSource:0}: Error finding container ee75fafcd7c04e57532b5c8c40d4fe25a488af384c858deb193c40cf51fa9ac7: Status 404 returned error can't find the container 
with id ee75fafcd7c04e57532b5c8c40d4fe25a488af384c858deb193c40cf51fa9ac7 Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.240902 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:28 crc kubenswrapper[4687]: E0131 06:45:28.241314 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:28.741297485 +0000 UTC m=+155.018557060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.277571 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gbcr5" podStartSLOduration=120.27754939 podStartE2EDuration="2m0.27754939s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:28.276707986 +0000 UTC m=+154.553967561" watchObservedRunningTime="2026-01-31 06:45:28.27754939 +0000 UTC m=+154.554808965" Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.343032 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:28 crc kubenswrapper[4687]: E0131 06:45:28.343677 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:28.843635935 +0000 UTC m=+155.120895510 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.366651 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-k7lmb" podStartSLOduration=121.366636712 podStartE2EDuration="2m1.366636712s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:28.364402688 +0000 UTC m=+154.641662293" watchObservedRunningTime="2026-01-31 06:45:28.366636712 +0000 UTC m=+154.643896287" Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.444660 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:28 crc kubenswrapper[4687]: E0131 06:45:28.445163 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:28.945145512 +0000 UTC m=+155.222405087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.516757 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.534439 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:28 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:28 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:28 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.534570 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.547811 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:28 crc kubenswrapper[4687]: E0131 06:45:28.548180 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:29.048165722 +0000 UTC m=+155.325425297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.648976 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:28 crc kubenswrapper[4687]: E0131 06:45:28.650093 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-31 06:45:29.150072289 +0000 UTC m=+155.427331864 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.684010 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.684064 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.702390 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr" event={"ID":"8766ce0a-e289-4861-9b0a-9b9ad7c0e623","Type":"ContainerStarted","Data":"7f35516cf75e6b87b4c2b33c2d78b2948004ffadd923a62795f4ead8b7e2ede0"} Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.716603 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-vxbfn" 
event={"ID":"e5b7bf80-e0c2-461f-944b-43b00db98f09","Type":"ContainerStarted","Data":"d51189c44157a750bad0f86e3f87f1e55a0b96e38e83e7aeaf32b5de4bd03b05"} Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.728863 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" event={"ID":"d2b80006-d9e1-40e5-becc-5764e747f572","Type":"ContainerStarted","Data":"d8f1ee9974aa398ca8def4c8b2a0a5c176338160292cf1ac881123dc13f1f1f4"} Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.747630 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" event={"ID":"fed9a01f-700b-493d-bb38-7a730dddccb3","Type":"ContainerStarted","Data":"576b4803358c89f0ed9c9754ce876d3129256a7ba844b9bb6633d328d779ab90"} Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.766791 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:28 crc kubenswrapper[4687]: E0131 06:45:28.772010 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:29.271991068 +0000 UTC m=+155.549250643 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.778432 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" event={"ID":"04efc7d0-c0f8-44ee-ac0e-5289f770f39e","Type":"ContainerStarted","Data":"2e9601a2056bea77babf0c6be3603f7d99f417dc4c10b4287b033e84a535d12f"} Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.778487 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" event={"ID":"04efc7d0-c0f8-44ee-ac0e-5289f770f39e","Type":"ContainerStarted","Data":"8d2a9c8c20fd803d912016fdd22da9049d62ddf4c604c439d339054aa97230d0"} Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.779151 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.784660 4687 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q6qrp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.784728 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" podUID="04efc7d0-c0f8-44ee-ac0e-5289f770f39e" containerName="packageserver" probeResult="failure" 
output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.786013 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vsgwh" event={"ID":"ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef","Type":"ContainerStarted","Data":"73124d0feb0f77ea8ba2109f0704c5e491ce9227a01ec870e63164b00f51136d"} Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.808166 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" podStartSLOduration=120.80814326 podStartE2EDuration="2m0.80814326s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:28.769656282 +0000 UTC m=+155.046915867" watchObservedRunningTime="2026-01-31 06:45:28.80814326 +0000 UTC m=+155.085402835" Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.820318 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" podStartSLOduration=120.820301167 podStartE2EDuration="2m0.820301167s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:28.806520674 +0000 UTC m=+155.083780249" watchObservedRunningTime="2026-01-31 06:45:28.820301167 +0000 UTC m=+155.097560742" Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.868831 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:28 crc kubenswrapper[4687]: E0131 06:45:28.869624 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:29.369599013 +0000 UTC m=+155.646858588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.870591 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:28 crc kubenswrapper[4687]: E0131 06:45:28.878380 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:29.378354603 +0000 UTC m=+155.655614178 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.893805 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vsgwh" podStartSLOduration=120.893778673 podStartE2EDuration="2m0.893778673s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:28.85792745 +0000 UTC m=+155.135187045" watchObservedRunningTime="2026-01-31 06:45:28.893778673 +0000 UTC m=+155.171038248" Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.916917 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" event={"ID":"7ebcd1d8-3a13-4a8c-859b-b1d8351883ef","Type":"ContainerStarted","Data":"b1526fe818a85e9b08191be1768a10deb8b95105d10bcac05636fce97593d697"} Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.944924 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" event={"ID":"72129b1e-6186-43d3-9471-a7e9a3f91ffe","Type":"ContainerStarted","Data":"c2d4102689b27342c0cdc93b2c9b28b652d78214a3463680c9ba10349f6b2dc0"} Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.956094 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-47m2d" 
event={"ID":"ce8ee922-54be-446b-ab92-e5459763496c","Type":"ContainerStarted","Data":"41b9b6995ee30aadad2f31595ec374eac449af3d13d8ebdb23e97366b23553f5"} Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.965386 4687 generic.go:334] "Generic (PLEG): container finished" podID="f32026ee-35c2-42bc-aa53-14e8ccc5e136" containerID="cf711bf2501c31e5fe83a7a7b922bea75844c2f7e04317f957a3f969ed9e3364" exitCode=0 Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.965539 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" event={"ID":"f32026ee-35c2-42bc-aa53-14e8ccc5e136","Type":"ContainerDied","Data":"cf711bf2501c31e5fe83a7a7b922bea75844c2f7e04317f957a3f969ed9e3364"} Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.978945 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:28 crc kubenswrapper[4687]: E0131 06:45:28.979399 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:29.479380186 +0000 UTC m=+155.756639761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:28 crc kubenswrapper[4687]: I0131 06:45:28.981393 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bgn6j" event={"ID":"dadcab3b-fc57-4a2f-b680-09fc1d6b1dff","Type":"ContainerStarted","Data":"bcf5921510a00787707c76228f77d8b01401eeaf22a3bfb6795299561e01aaec"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.018621 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb" event={"ID":"c58d0a19-a26d-4bb4-a46a-4bffe9491a99","Type":"ContainerStarted","Data":"d63959d60332691d892f353b4e66b77a3edc93c3f6f2a6b1496fcc50127474f4"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.030512 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-bgn6j" podStartSLOduration=121.030472584 podStartE2EDuration="2m1.030472584s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.007127118 +0000 UTC m=+155.284386693" watchObservedRunningTime="2026-01-31 06:45:29.030472584 +0000 UTC m=+155.307732179" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.031374 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh" 
event={"ID":"787fd9f1-90b0-454b-a9cf-2016db70043d","Type":"ContainerStarted","Data":"eb93a080052df3199520a10eb538eb5ee210e4816f59122426a8c0865cd5acb9"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.031470 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh" event={"ID":"787fd9f1-90b0-454b-a9cf-2016db70043d","Type":"ContainerStarted","Data":"158767c7cd38011be22c8e8c43331f52924725e1ee214310c61204d66262f325"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.050180 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" event={"ID":"ba4ee6bf-8298-425c-8603-0816ef6d62a2","Type":"ContainerStarted","Data":"9db28e6d5f2fdd47e54c0352395dfce7a2cb6fea53b80f1494519116a7991dc7"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.067354 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jp9kx" event={"ID":"91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4","Type":"ContainerStarted","Data":"ee75fafcd7c04e57532b5c8c40d4fe25a488af384c858deb193c40cf51fa9ac7"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.070523 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4qxbh" podStartSLOduration=121.070503576 podStartE2EDuration="2m1.070503576s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.067262244 +0000 UTC m=+155.344521839" watchObservedRunningTime="2026-01-31 06:45:29.070503576 +0000 UTC m=+155.347763151" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.082583 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:29 crc kubenswrapper[4687]: E0131 06:45:29.083279 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:29.58326688 +0000 UTC m=+155.860526455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.086569 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" event={"ID":"ea6afbaa-a516-45e0-bbd8-199b879e2654","Type":"ContainerStarted","Data":"852231b1387fd3d60836e9358005d35936f0194543fdceb35a1d61c57ac4ea5c"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.089188 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.103672 4687 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6qn9w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 
06:45:29.104060 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" podUID="ea6afbaa-a516-45e0-bbd8-199b879e2654" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.128743 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" podStartSLOduration=122.128717397 podStartE2EDuration="2m2.128717397s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.117461026 +0000 UTC m=+155.394720611" watchObservedRunningTime="2026-01-31 06:45:29.128717397 +0000 UTC m=+155.405976982" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.136007 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" event={"ID":"25dca60d-d3da-4a23-b32a-cf4654f6298d","Type":"ContainerStarted","Data":"f5adfa2edaefe051d5193c1f995ba8c8957bdaba718b7a84d1d8621086e3c67a"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.140941 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt" event={"ID":"8f3171d3-7275-477b-8c99-cae75ecd914c","Type":"ContainerStarted","Data":"87d03b729f2d0bf75a0d9318d6e20ac89f21c7637790c8251aa1089678989301"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.141014 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt" event={"ID":"8f3171d3-7275-477b-8c99-cae75ecd914c","Type":"ContainerStarted","Data":"e4dd9ae217e81b2334032fb42ab9a66e813c960b409c9c95682ef7be2a2dcf17"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 
06:45:29.152668 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kg8s2" event={"ID":"abed3680-932f-4c8b-8ff2-3b011b996088","Type":"ContainerStarted","Data":"2e17e2377f67f9a1ed977d25efc5c84d277316230f76e5ab59487410429fb93d"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.161114 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-52dkb" event={"ID":"40e06c8e-427c-4de8-b3c9-7a10e83ea115","Type":"ContainerStarted","Data":"74bb8b784fcd875b926dded2b5aa753d797719e9578ee446c5000ff1aaa0bbc9"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.169071 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq" event={"ID":"c7d4fad4-0654-4a5a-b91f-a5e2c8bd7f73","Type":"ContainerStarted","Data":"dcd44c962412c8d80f8f0ff1a81cba04a412c59f4b1c137072a728a18845a97a"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.170548 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-kv6zt" podStartSLOduration=121.17052619 podStartE2EDuration="2m1.17052619s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.165690402 +0000 UTC m=+155.442949977" watchObservedRunningTime="2026-01-31 06:45:29.17052619 +0000 UTC m=+155.447785765" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.189657 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 
06:45:29.190767 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg" event={"ID":"702885bb-6915-436f-b925-b4c1e88e5edf","Type":"ContainerStarted","Data":"13fd4d6107dfe8bbf5ebcccf6651424dd8e1fd0fe63b61bc52ff794fa8679a9d"} Jan 31 06:45:29 crc kubenswrapper[4687]: E0131 06:45:29.191182 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:29.691164329 +0000 UTC m=+155.968423914 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.193556 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.202263 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2" event={"ID":"86e55636-cd31-4ec5-9e24-2c281d474481","Type":"ContainerStarted","Data":"30a0abbd4381423164e13308197c1e7923449b18db275781274fb08259bb2959"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.219598 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-kg8s2" podStartSLOduration=121.21957849 podStartE2EDuration="2m1.21957849s" 
podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.190045897 +0000 UTC m=+155.467305482" watchObservedRunningTime="2026-01-31 06:45:29.21957849 +0000 UTC m=+155.496838065" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.227710 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-52dkb" podStartSLOduration=6.227694361 podStartE2EDuration="6.227694361s" podCreationTimestamp="2026-01-31 06:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.224874611 +0000 UTC m=+155.502134186" watchObservedRunningTime="2026-01-31 06:45:29.227694361 +0000 UTC m=+155.504953936" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.237670 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6k67p" event={"ID":"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7","Type":"ContainerStarted","Data":"52607fe6361d6282cbbb88a89bce4a02442affa01542d23d2f8fc35fd0f2524a"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.268307 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jrsbk" event={"ID":"18165c42-63ba-4c65-8ba7-f0e205fc74b7","Type":"ContainerStarted","Data":"85dee9209eb880985603353b4c5b61236f854c87be2e70f87af7bb5338ac807e"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.269058 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-jrsbk" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.276825 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-jrsbk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get 
\"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.276888 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jrsbk" podUID="18165c42-63ba-4c65-8ba7-f0e205fc74b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.285342 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" event={"ID":"175a043a-d6f7-4c39-953b-560986f36646","Type":"ContainerStarted","Data":"04aa1e85dae0b8c12e139d1fff2ff7fff3db50a78fe9961ef50050961eb3f9af"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.285397 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.293329 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:29 crc kubenswrapper[4687]: E0131 06:45:29.295596 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:29.795579758 +0000 UTC m=+156.072839413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.303150 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wz29h" event={"ID":"97f155cf-b9dc-420d-8540-2b03fab31a5e","Type":"ContainerStarted","Data":"e395015ab557fc5c438d6ead386311c516531ec0db0537ecb6f86c82c273a5e7"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.303955 4687 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-c27wp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.303992 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" podUID="175a043a-d6f7-4c39-953b-560986f36646" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.316391 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" event={"ID":"7c641c91-1772-452d-b8e5-e2e917fe0f3e","Type":"ContainerStarted","Data":"520a63ea3a6357092ed0193887c8cff0498109223a398353b37f90362e8f9c93"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.317603 4687 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2cqlg" podStartSLOduration=122.317582406 podStartE2EDuration="2m2.317582406s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.317459883 +0000 UTC m=+155.594719458" watchObservedRunningTime="2026-01-31 06:45:29.317582406 +0000 UTC m=+155.594842001" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.338060 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh" event={"ID":"5695d1ed-642b-4546-9624-306b27441931","Type":"ContainerStarted","Data":"b00f4b24a1d1dfe9dbef35a47fbd654e67d5f7ba0d3306186f7d47472378fcc3"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.345501 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt" event={"ID":"cf92b96c-c1bc-4102-a49b-003d08ef9de7","Type":"ContainerStarted","Data":"6089c88854d23eb137c30a23bae5d1831548e97bda4b1f1fcfc728c214eeef04"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.349301 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bq5j8" event={"ID":"0e99374e-992a-48aa-b353-0e298dfb0889","Type":"ContainerStarted","Data":"13c4aabdbc7323a4c6b28dbc9695ee5eb472cabbce1b42f040f106acdf387f87"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.361224 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s56tv" event={"ID":"5ff64219-76d2-4a04-9932-59f5c1619358","Type":"ContainerStarted","Data":"2969bb30c743a76a782aef12bca4ffc2092f2d3504c336021dbcf4e75e1acdb4"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.362173 4687 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wfgl2" podStartSLOduration=121.362161108 podStartE2EDuration="2m1.362161108s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.357612858 +0000 UTC m=+155.634872443" watchObservedRunningTime="2026-01-31 06:45:29.362161108 +0000 UTC m=+155.639420683" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.393681 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qjjpt" podStartSLOduration=122.393661357 podStartE2EDuration="2m2.393661357s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.390881618 +0000 UTC m=+155.668141193" watchObservedRunningTime="2026-01-31 06:45:29.393661357 +0000 UTC m=+155.670920942" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.393756 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" event={"ID":"fcc4167b-60df-4666-b1b5-dc5ea87b7f6e","Type":"ContainerStarted","Data":"ec850bd8b62ac8abd8eef8318112dbb46f4b2769c2129d3c73d90f03c0d40ca6"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.394994 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:29 crc kubenswrapper[4687]: E0131 06:45:29.397271 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:29.897250179 +0000 UTC m=+156.174509774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.406985 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-crdmb" event={"ID":"c1b4bdad-f662-48bd-b1ae-1a9916973b8b","Type":"ContainerStarted","Data":"7e37b2e93d4616b5ad047c11c401a299c8069db4cee62109ea5e2e093d32aed6"} Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.416000 4687 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-8qhsc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.416057 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" podUID="e2e4841e-e880-45f4-8769-cd9fea35654e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.434169 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" podStartSLOduration=121.434149822 podStartE2EDuration="2m1.434149822s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.43407635 +0000 UTC m=+155.711335925" watchObservedRunningTime="2026-01-31 06:45:29.434149822 +0000 UTC m=+155.711409397" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.510177 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wz29h" podStartSLOduration=6.510153341 podStartE2EDuration="6.510153341s" podCreationTimestamp="2026-01-31 06:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.458729254 +0000 UTC m=+155.735988839" watchObservedRunningTime="2026-01-31 06:45:29.510153341 +0000 UTC m=+155.787412916" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.510471 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:29 crc kubenswrapper[4687]: E0131 06:45:29.514615 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:30.014597128 +0000 UTC m=+156.291856703 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.528075 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:29 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:29 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:29 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.528139 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.542540 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-jrsbk" podStartSLOduration=122.542520375 podStartE2EDuration="2m2.542520375s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.508858704 +0000 UTC m=+155.786118289" watchObservedRunningTime="2026-01-31 06:45:29.542520375 +0000 UTC m=+155.819779950" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.576361 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.613661 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:29 crc kubenswrapper[4687]: E0131 06:45:29.613867 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:30.113833609 +0000 UTC m=+156.391093184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.614187 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:29 crc kubenswrapper[4687]: E0131 06:45:29.614832 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:30.114821618 +0000 UTC m=+156.392081193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.616381 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh" podStartSLOduration=122.616357361 podStartE2EDuration="2m2.616357361s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.57565685 +0000 UTC m=+155.852916425" watchObservedRunningTime="2026-01-31 06:45:29.616357361 +0000 UTC m=+155.893616936" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.655039 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-kv4b4" podStartSLOduration=122.655015584 podStartE2EDuration="2m2.655015584s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.620955963 +0000 UTC m=+155.898215548" watchObservedRunningTime="2026-01-31 06:45:29.655015584 +0000 UTC m=+155.932275159" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.721449 4687 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s56tv" podStartSLOduration=122.721427199 podStartE2EDuration="2m2.721427199s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.721119291 +0000 UTC m=+155.998378866" watchObservedRunningTime="2026-01-31 06:45:29.721427199 +0000 UTC m=+155.998686774" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.725447 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:29 crc kubenswrapper[4687]: E0131 06:45:29.725939 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:30.225920588 +0000 UTC m=+156.503180163 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.755514 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bb2t2" podStartSLOduration=122.755494722 podStartE2EDuration="2m2.755494722s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.753184646 +0000 UTC m=+156.030444231" watchObservedRunningTime="2026-01-31 06:45:29.755494722 +0000 UTC m=+156.032754297" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.792782 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss" podStartSLOduration=122.792760875 podStartE2EDuration="2m2.792760875s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.792628231 +0000 UTC m=+156.069887806" watchObservedRunningTime="2026-01-31 06:45:29.792760875 +0000 UTC m=+156.070020450" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.827155 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" 
(UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:29 crc kubenswrapper[4687]: E0131 06:45:29.827553 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:30.327537227 +0000 UTC m=+156.604796802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.830811 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-crdmb" podStartSLOduration=122.83079507 podStartE2EDuration="2m2.83079507s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:29.828724661 +0000 UTC m=+156.105984236" watchObservedRunningTime="2026-01-31 06:45:29.83079507 +0000 UTC m=+156.108054645" Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.928363 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:29 crc kubenswrapper[4687]: E0131 06:45:29.928558 4687 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:30.428528889 +0000 UTC m=+156.705788464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:29 crc kubenswrapper[4687]: I0131 06:45:29.928932 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:29 crc kubenswrapper[4687]: E0131 06:45:29.929287 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:30.42927631 +0000 UTC m=+156.706535955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.029831 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:30 crc kubenswrapper[4687]: E0131 06:45:30.030037 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:30.530010435 +0000 UTC m=+156.807270010 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.030260 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:30 crc kubenswrapper[4687]: E0131 06:45:30.030708 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:30.530686814 +0000 UTC m=+156.807946439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.131126 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:30 crc kubenswrapper[4687]: E0131 06:45:30.131276 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:30.631258794 +0000 UTC m=+156.908518369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.131365 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:30 crc kubenswrapper[4687]: E0131 06:45:30.131657 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:30.631647785 +0000 UTC m=+156.908907370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.200951 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.201187 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.203016 4687 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-q9tfw container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.19:8443/livez\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.203063 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" podUID="d2b80006-d9e1-40e5-becc-5764e747f572" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.19:8443/livez\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.232009 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:30 crc 
kubenswrapper[4687]: E0131 06:45:30.232313 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:30.732296977 +0000 UTC m=+157.009556552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.333211 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:30 crc kubenswrapper[4687]: E0131 06:45:30.333570 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:30.833554106 +0000 UTC m=+157.110813681 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.412169 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxvkh" event={"ID":"5695d1ed-642b-4546-9624-306b27441931","Type":"ContainerStarted","Data":"4e39d96abbf35e5075b8b2b5a789e3f5d46809fad8d9034e5ac48ea19e1ee661"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.414200 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bq5j8" event={"ID":"0e99374e-992a-48aa-b353-0e298dfb0889","Type":"ContainerStarted","Data":"612330cf57e0c67a032ed17148c889c06fa34c22d670695ded8f8e9cd0d7a1e6"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.414243 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bq5j8" event={"ID":"0e99374e-992a-48aa-b353-0e298dfb0889","Type":"ContainerStarted","Data":"90e63d471a88664e0e0455808cb96dcc53ba7faaf82b0d49bd7a04b2fb1eb705"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.415452 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbgss" event={"ID":"f112d0fe-1fbc-4892-b6ff-81ab1edfcb0b","Type":"ContainerStarted","Data":"ebcb2893cdcb4d129ce4a2524bc43cb1089c4726cbaf273fcb26985bd788ad68"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.417423 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" event={"ID":"72129b1e-6186-43d3-9471-a7e9a3f91ffe","Type":"ContainerStarted","Data":"824960fedbd9d063a652dc9ab57ff8ea4e65c23d3ec09283a3062b594370184d"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.417447 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" event={"ID":"72129b1e-6186-43d3-9471-a7e9a3f91ffe","Type":"ContainerStarted","Data":"3c2d2c2090e4a2a04ac61c9c4881dae4ddec5d45747c795fd6ed77910e23ae8a"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.418453 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq" event={"ID":"c7d4fad4-0654-4a5a-b91f-a5e2c8bd7f73","Type":"ContainerStarted","Data":"62a37bee33a3e0ff1b473058b0b36178547b5ec434b824408de90e093a16c0f4"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.418500 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq" event={"ID":"c7d4fad4-0654-4a5a-b91f-a5e2c8bd7f73","Type":"ContainerStarted","Data":"264e1010ad3e99e86c3ae38af7203ea352992ed7727af15892ffa6125b5938fe"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.418559 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.419703 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb" event={"ID":"c58d0a19-a26d-4bb4-a46a-4bffe9491a99","Type":"ContainerStarted","Data":"fbb55969e5589de996314a6604cd13bbd4dd22b8b044b3d6f7e2b03a74c296a1"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.421858 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-jp9kx" event={"ID":"91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4","Type":"ContainerStarted","Data":"6f2bd049dd3c6541fc9216374fecef44b6cac68f0367e2a24d7e25f207b7fa8f"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.423287 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-52dkb" event={"ID":"40e06c8e-427c-4de8-b3c9-7a10e83ea115","Type":"ContainerStarted","Data":"06a30465a45e2ad223652b86039e075ef67b326a125570da7319c9d7447f47ba"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.424542 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" event={"ID":"25dca60d-d3da-4a23-b32a-cf4654f6298d","Type":"ContainerStarted","Data":"40fee2f168b38227bf95546751756ea86ebdf6527264adf5a4d3c9813bc09b77"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.424716 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.426320 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s56tv" event={"ID":"5ff64219-76d2-4a04-9932-59f5c1619358","Type":"ContainerStarted","Data":"35ee33c5723342062358e965958c9245639756461c72ac5bb756b1314ff8f8ae"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.426391 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s56tv" event={"ID":"5ff64219-76d2-4a04-9932-59f5c1619358","Type":"ContainerStarted","Data":"b44f10a75a3eccef10013822235098c500e9cc4ec684a84763f5dcf7d9412e70"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.426548 4687 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-m6962 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.426594 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" podUID="25dca60d-d3da-4a23-b32a-cf4654f6298d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.429821 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-vxbfn" event={"ID":"e5b7bf80-e0c2-461f-944b-43b00db98f09","Type":"ContainerStarted","Data":"1af305cd6fe2ca103ccf609292d234f13f76e3fe9225aa21d0725a8019d3141c"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.430011 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-vxbfn" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.430179 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bq5j8" podStartSLOduration=122.430163843 podStartE2EDuration="2m2.430163843s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:30.428923247 +0000 UTC m=+156.706182822" watchObservedRunningTime="2026-01-31 06:45:30.430163843 +0000 UTC m=+156.707423418" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.431834 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-vxbfn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 06:45:30 crc 
kubenswrapper[4687]: I0131 06:45:30.431886 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vxbfn" podUID="e5b7bf80-e0c2-461f-944b-43b00db98f09" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.432679 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-47m2d" event={"ID":"ce8ee922-54be-446b-ab92-e5459763496c","Type":"ContainerStarted","Data":"f122652042fb8ad8b809f9a872abf075bf67e930d1be58ee298880b5cc037700"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.433887 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" event={"ID":"7ebcd1d8-3a13-4a8c-859b-b1d8351883ef","Type":"ContainerStarted","Data":"e3f6b72d6c8260d94aec19956714d99f8cabaf8d51c8ac0c771b136a701b5d66"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.434112 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.434164 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:30 crc kubenswrapper[4687]: E0131 06:45:30.434370 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 06:45:30.934348402 +0000 UTC m=+157.211607977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.435073 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:30 crc kubenswrapper[4687]: E0131 06:45:30.435525 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:30.935513785 +0000 UTC m=+157.212773360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.435561 4687 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-pzw56 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.435601 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" podUID="7ebcd1d8-3a13-4a8c-859b-b1d8351883ef" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.436619 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" event={"ID":"ba4ee6bf-8298-425c-8603-0816ef6d62a2","Type":"ContainerStarted","Data":"f13937e3925005d9b71373d0ca3f178359d0c6941dd98505f62e0ce003492e8f"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.438606 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" event={"ID":"f32026ee-35c2-42bc-aa53-14e8ccc5e136","Type":"ContainerStarted","Data":"367130c04b14b9a7a835cd139f82d0e47d4d82c3b7adcc662fcd2018f7e70a73"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.438725 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.440276 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr" event={"ID":"8766ce0a-e289-4861-9b0a-9b9ad7c0e623","Type":"ContainerStarted","Data":"b3ed9ecf03f4dd720ef4238715b9b4af167d0b49a97b7f4f61e12ad1de881855"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.443562 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kg8s2" event={"ID":"abed3680-932f-4c8b-8ff2-3b011b996088","Type":"ContainerStarted","Data":"43cf494c03bb21b911d166d766100372c2d3c467af1c0b717821bdd0d1fbef9b"} Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.444057 4687 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q6qrp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.444103 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" podUID="04efc7d0-c0f8-44ee-ac0e-5289f770f39e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.444902 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-jrsbk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.444935 4687 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jrsbk" podUID="18165c42-63ba-4c65-8ba7-f0e205fc74b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.444992 4687 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6qn9w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.445018 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" podUID="ea6afbaa-a516-45e0-bbd8-199b879e2654" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.450302 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hsxqb" podStartSLOduration=122.450286927 podStartE2EDuration="2m2.450286927s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:30.447716004 +0000 UTC m=+156.724975579" watchObservedRunningTime="2026-01-31 06:45:30.450286927 +0000 UTC m=+156.727546502" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.496755 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq" podStartSLOduration=122.496740812 podStartE2EDuration="2m2.496740812s" 
podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:30.485015448 +0000 UTC m=+156.762275023" watchObservedRunningTime="2026-01-31 06:45:30.496740812 +0000 UTC m=+156.774000387" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.514058 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.522487 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:30 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:30 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:30 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.522553 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.541348 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:30 crc kubenswrapper[4687]: E0131 06:45:30.544686 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.044653869 +0000 UTC m=+157.321924655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.563299 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.567036 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" podStartSLOduration=122.567017118 podStartE2EDuration="2m2.567017118s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:30.520632784 +0000 UTC m=+156.797892359" watchObservedRunningTime="2026-01-31 06:45:30.567017118 +0000 UTC m=+156.844276713" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.567710 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44wfw" podStartSLOduration=122.567695157 podStartE2EDuration="2m2.567695157s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:30.561210842 +0000 UTC m=+156.838470417" watchObservedRunningTime="2026-01-31 
06:45:30.567695157 +0000 UTC m=+156.844954732" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.645654 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:30 crc kubenswrapper[4687]: E0131 06:45:30.646123 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.146108944 +0000 UTC m=+157.423368519 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.689727 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2nqsr" podStartSLOduration=123.689708219 podStartE2EDuration="2m3.689708219s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:30.636159421 +0000 UTC m=+156.913418996" watchObservedRunningTime="2026-01-31 06:45:30.689708219 +0000 UTC m=+156.966967794" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 
06:45:30.738494 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" podStartSLOduration=122.738452329 podStartE2EDuration="2m2.738452329s" podCreationTimestamp="2026-01-31 06:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:30.737550324 +0000 UTC m=+157.014809899" watchObservedRunningTime="2026-01-31 06:45:30.738452329 +0000 UTC m=+157.015711954" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.746518 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:30 crc kubenswrapper[4687]: E0131 06:45:30.746949 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.246930941 +0000 UTC m=+157.524190516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.800902 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-vxbfn" podStartSLOduration=123.80087978 podStartE2EDuration="2m3.80087978s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:30.766010825 +0000 UTC m=+157.043270400" watchObservedRunningTime="2026-01-31 06:45:30.80087978 +0000 UTC m=+157.078139355" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.848668 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:30 crc kubenswrapper[4687]: E0131 06:45:30.849087 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.349073045 +0000 UTC m=+157.626332620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.850346 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-47m2d" podStartSLOduration=123.850332621 podStartE2EDuration="2m3.850332621s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:30.80228732 +0000 UTC m=+157.079546895" watchObservedRunningTime="2026-01-31 06:45:30.850332621 +0000 UTC m=+157.127592196" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.850792 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" podStartSLOduration=123.850786944 podStartE2EDuration="2m3.850786944s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:30.848113577 +0000 UTC m=+157.125373162" watchObservedRunningTime="2026-01-31 06:45:30.850786944 +0000 UTC m=+157.128046529" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.888195 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" podStartSLOduration=123.888177811 podStartE2EDuration="2m3.888177811s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:30.887852041 +0000 UTC m=+157.165111646" watchObservedRunningTime="2026-01-31 06:45:30.888177811 +0000 UTC m=+157.165437396" Jan 31 06:45:30 crc kubenswrapper[4687]: I0131 06:45:30.956091 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:30 crc kubenswrapper[4687]: E0131 06:45:30.956501 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.45648177 +0000 UTC m=+157.733741345 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.058027 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:31 crc kubenswrapper[4687]: E0131 06:45:31.058455 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.558437889 +0000 UTC m=+157.835697464 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.159604 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:31 crc kubenswrapper[4687]: E0131 06:45:31.159782 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.65975656 +0000 UTC m=+157.937016135 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.160096 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:31 crc kubenswrapper[4687]: E0131 06:45:31.160430 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.660422649 +0000 UTC m=+157.937682224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.261058 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:31 crc kubenswrapper[4687]: E0131 06:45:31.261265 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.761235225 +0000 UTC m=+158.038494800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.261458 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:31 crc kubenswrapper[4687]: E0131 06:45:31.261812 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.761795081 +0000 UTC m=+158.039054656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.362730 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:31 crc kubenswrapper[4687]: E0131 06:45:31.362923 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.862906517 +0000 UTC m=+158.140166092 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.363017 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:31 crc kubenswrapper[4687]: E0131 06:45:31.363339 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.863320578 +0000 UTC m=+158.140580223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.448813 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jp9kx" event={"ID":"91eb9b4e-6f7a-4e92-8ad8-4bd3668e69d4","Type":"ContainerStarted","Data":"1760906c294b7b1a65b0ca64fd67d0f9e7699c4fad0fe1ab1dc4b8864d311b74"} Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.449883 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-jp9kx" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.452188 4687 generic.go:334] "Generic (PLEG): container finished" podID="fed9a01f-700b-493d-bb38-7a730dddccb3" containerID="576b4803358c89f0ed9c9754ce876d3129256a7ba844b9bb6633d328d779ab90" exitCode=0 Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.452261 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" event={"ID":"fed9a01f-700b-493d-bb38-7a730dddccb3","Type":"ContainerDied","Data":"576b4803358c89f0ed9c9754ce876d3129256a7ba844b9bb6633d328d779ab90"} Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.453773 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6k67p" event={"ID":"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7","Type":"ContainerStarted","Data":"78f7b961900dc4660c5972322eb8dbdf0652f035fa41739a57fdc39ba9808cf0"} Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.454915 4687 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-m6962 
container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.454973 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" podUID="25dca60d-d3da-4a23-b32a-cf4654f6298d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.455932 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-vxbfn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.455983 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vxbfn" podUID="e5b7bf80-e0c2-461f-944b-43b00db98f09" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.463918 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:31 crc kubenswrapper[4687]: E0131 06:45:31.464072 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 06:45:31.964046433 +0000 UTC m=+158.241306008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.464258 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:31 crc kubenswrapper[4687]: E0131 06:45:31.464607 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:31.964598328 +0000 UTC m=+158.241857903 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.518000 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pzw56" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.521753 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:31 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:31 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:31 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.521808 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.525833 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jp9kx" podStartSLOduration=8.525815855 podStartE2EDuration="8.525815855s" podCreationTimestamp="2026-01-31 06:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:31.513227916 +0000 UTC m=+157.790487491" 
watchObservedRunningTime="2026-01-31 06:45:31.525815855 +0000 UTC m=+157.803075430" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.564827 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:31 crc kubenswrapper[4687]: E0131 06:45:31.565052 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:32.065019434 +0000 UTC m=+158.342279009 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.565476 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.565800 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.565868 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.567484 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:45:31 crc kubenswrapper[4687]: E0131 06:45:31.567929 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:32.067914226 +0000 UTC m=+158.345173801 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.577089 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.591809 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.654887 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-jrsbk" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.673935 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.674120 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.674157 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:45:31 crc kubenswrapper[4687]: E0131 06:45:31.674596 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:32.174560139 +0000 UTC m=+158.451819714 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.689547 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.689549 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.777222 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:31 crc kubenswrapper[4687]: E0131 06:45:31.777561 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:32.277547708 +0000 UTC m=+158.554807283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.831866 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.851783 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.856886 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.879507 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:31 crc kubenswrapper[4687]: E0131 06:45:31.879961 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:32.37994063 +0000 UTC m=+158.657200205 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:31 crc kubenswrapper[4687]: I0131 06:45:31.980863 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:31 crc kubenswrapper[4687]: E0131 06:45:31.981261 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:32.48124606 +0000 UTC m=+158.758505645 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.008133 4687 csr.go:261] certificate signing request csr-t4zz8 is approved, waiting to be issued Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.024445 4687 csr.go:257] certificate signing request csr-t4zz8 is issued Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.082128 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:32 crc kubenswrapper[4687]: E0131 06:45:32.082296 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:32.582262773 +0000 UTC m=+158.859522348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.082533 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:32 crc kubenswrapper[4687]: E0131 06:45:32.082901 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:32.582889881 +0000 UTC m=+158.860149456 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.183578 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:32 crc kubenswrapper[4687]: E0131 06:45:32.183795 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:32.683772729 +0000 UTC m=+158.961032314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.183926 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:32 crc kubenswrapper[4687]: E0131 06:45:32.184224 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:32.684215992 +0000 UTC m=+158.961475567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.285204 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:32 crc kubenswrapper[4687]: E0131 06:45:32.285385 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:32.785359858 +0000 UTC m=+159.062619433 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.285552 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:32 crc kubenswrapper[4687]: E0131 06:45:32.285890 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:32.785882283 +0000 UTC m=+159.063141858 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.387610 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:32 crc kubenswrapper[4687]: E0131 06:45:32.388272 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:32.888251614 +0000 UTC m=+159.165511199 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.490216 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:32 crc kubenswrapper[4687]: E0131 06:45:32.490568 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:32.990555073 +0000 UTC m=+159.267814648 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.549937 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:32 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:32 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:32 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.550278 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.591707 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:32 crc kubenswrapper[4687]: E0131 06:45:32.592917 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 06:45:33.092897883 +0000 UTC m=+159.370157458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.693158 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:32 crc kubenswrapper[4687]: E0131 06:45:32.693507 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:33.193493984 +0000 UTC m=+159.470753559 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.794426 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:32 crc kubenswrapper[4687]: E0131 06:45:32.794705 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:33.294691111 +0000 UTC m=+159.571950686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.895568 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:32 crc kubenswrapper[4687]: E0131 06:45:32.895964 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:33.395947381 +0000 UTC m=+159.673206956 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.997184 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:32 crc kubenswrapper[4687]: E0131 06:45:32.997327 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:33.497309953 +0000 UTC m=+159.774569528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:32 crc kubenswrapper[4687]: I0131 06:45:32.997783 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:32 crc kubenswrapper[4687]: E0131 06:45:32.998230 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:33.498210249 +0000 UTC m=+159.775469894 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.028791 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-31 06:40:32 +0000 UTC, rotation deadline is 2026-10-31 10:30:09.933125495 +0000 UTC Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.028818 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6555h44m36.904310163s for next certificate rotation Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.090762 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.099203 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:33 crc kubenswrapper[4687]: E0131 06:45:33.099613 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:33.599593571 +0000 UTC m=+159.876853146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.202265 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2hsr\" (UniqueName: \"kubernetes.io/projected/fed9a01f-700b-493d-bb38-7a730dddccb3-kube-api-access-h2hsr\") pod \"fed9a01f-700b-493d-bb38-7a730dddccb3\" (UID: \"fed9a01f-700b-493d-bb38-7a730dddccb3\") " Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.202550 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fed9a01f-700b-493d-bb38-7a730dddccb3-config-volume\") pod \"fed9a01f-700b-493d-bb38-7a730dddccb3\" (UID: \"fed9a01f-700b-493d-bb38-7a730dddccb3\") " Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.202720 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fed9a01f-700b-493d-bb38-7a730dddccb3-secret-volume\") pod \"fed9a01f-700b-493d-bb38-7a730dddccb3\" (UID: \"fed9a01f-700b-493d-bb38-7a730dddccb3\") " Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.203004 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:33 crc 
kubenswrapper[4687]: E0131 06:45:33.203340 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:33.703328701 +0000 UTC m=+159.980588276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.206069 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fed9a01f-700b-493d-bb38-7a730dddccb3-config-volume" (OuterVolumeSpecName: "config-volume") pod "fed9a01f-700b-493d-bb38-7a730dddccb3" (UID: "fed9a01f-700b-493d-bb38-7a730dddccb3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.216073 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fed9a01f-700b-493d-bb38-7a730dddccb3-kube-api-access-h2hsr" (OuterVolumeSpecName: "kube-api-access-h2hsr") pod "fed9a01f-700b-493d-bb38-7a730dddccb3" (UID: "fed9a01f-700b-493d-bb38-7a730dddccb3"). InnerVolumeSpecName "kube-api-access-h2hsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.218854 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fed9a01f-700b-493d-bb38-7a730dddccb3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fed9a01f-700b-493d-bb38-7a730dddccb3" (UID: "fed9a01f-700b-493d-bb38-7a730dddccb3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:45:33 crc kubenswrapper[4687]: W0131 06:45:33.220385 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-0c98b7783321c520d3e4a25a4cb668a5bde0f3dec8a43441be680c203864600f WatchSource:0}: Error finding container 0c98b7783321c520d3e4a25a4cb668a5bde0f3dec8a43441be680c203864600f: Status 404 returned error can't find the container with id 0c98b7783321c520d3e4a25a4cb668a5bde0f3dec8a43441be680c203864600f Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.234399 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w6tt8"] Jan 31 06:45:33 crc kubenswrapper[4687]: E0131 06:45:33.234612 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fed9a01f-700b-493d-bb38-7a730dddccb3" containerName="collect-profiles" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.234623 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fed9a01f-700b-493d-bb38-7a730dddccb3" containerName="collect-profiles" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.234848 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="fed9a01f-700b-493d-bb38-7a730dddccb3" containerName="collect-profiles" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.235527 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.256735 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.303932 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.304110 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-catalog-content\") pod \"certified-operators-w6tt8\" (UID: \"2a8064f7-2493-4fd0-a460-9d98ebdd1a24\") " pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.304153 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pcn4\" (UniqueName: \"kubernetes.io/projected/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-kube-api-access-2pcn4\") pod \"certified-operators-w6tt8\" (UID: \"2a8064f7-2493-4fd0-a460-9d98ebdd1a24\") " pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.304215 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-utilities\") pod \"certified-operators-w6tt8\" (UID: \"2a8064f7-2493-4fd0-a460-9d98ebdd1a24\") " pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.304245 4687 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2hsr\" (UniqueName: \"kubernetes.io/projected/fed9a01f-700b-493d-bb38-7a730dddccb3-kube-api-access-h2hsr\") on node \"crc\" DevicePath \"\"" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.304256 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fed9a01f-700b-493d-bb38-7a730dddccb3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.304265 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fed9a01f-700b-493d-bb38-7a730dddccb3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 06:45:33 crc kubenswrapper[4687]: E0131 06:45:33.304526 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:33.804498548 +0000 UTC m=+160.081758133 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.307362 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6tt8"] Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.441622 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-catalog-content\") pod \"certified-operators-w6tt8\" (UID: \"2a8064f7-2493-4fd0-a460-9d98ebdd1a24\") " pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.441713 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.441752 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pcn4\" (UniqueName: \"kubernetes.io/projected/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-kube-api-access-2pcn4\") pod \"certified-operators-w6tt8\" (UID: \"2a8064f7-2493-4fd0-a460-9d98ebdd1a24\") " pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.441880 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-utilities\") pod \"certified-operators-w6tt8\" (UID: \"2a8064f7-2493-4fd0-a460-9d98ebdd1a24\") " pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.442367 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-catalog-content\") pod \"certified-operators-w6tt8\" (UID: \"2a8064f7-2493-4fd0-a460-9d98ebdd1a24\") " pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:45:33 crc kubenswrapper[4687]: E0131 06:45:33.442844 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:33.942819855 +0000 UTC m=+160.220079430 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.458099 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-utilities\") pod \"certified-operators-w6tt8\" (UID: \"2a8064f7-2493-4fd0-a460-9d98ebdd1a24\") " pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.458280 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g6md9"] Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.484582 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.496706 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g6md9"] Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.505188 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a238d160184924a2466605f494b02c019a6efc71737e97f0d48df784deea1c94"} Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.506057 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.517719 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"fa669c408455a7d14092b582a931c9aca66912f508dcd21f3642250c3315a0fe"} Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.519088 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" event={"ID":"fed9a01f-700b-493d-bb38-7a730dddccb3","Type":"ContainerDied","Data":"d58a3fda4ce0f2345afeaa6dd3e628cf4c1c9c06cce53c995f0bc7d2c2ce96dc"} Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.519129 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d58a3fda4ce0f2345afeaa6dd3e628cf4c1c9c06cce53c995f0bc7d2c2ce96dc" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.519230 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497365-4d98l" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.520904 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:33 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:33 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:33 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.520944 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6k67p" event={"ID":"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7","Type":"ContainerStarted","Data":"043902646e9c5500d313d855daf56799383d2e188e1351f423b8bfba04823779"} Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.520966 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6k67p" event={"ID":"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7","Type":"ContainerStarted","Data":"87b265e97d9a94f52633f50622bee3e99b84c84aee6561ee59202ded17ce2045"} Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.520953 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.522295 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0c98b7783321c520d3e4a25a4cb668a5bde0f3dec8a43441be680c203864600f"} Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.537800 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pcn4\" (UniqueName: \"kubernetes.io/projected/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-kube-api-access-2pcn4\") pod \"certified-operators-w6tt8\" (UID: \"2a8064f7-2493-4fd0-a460-9d98ebdd1a24\") " pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.548241 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.548505 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-utilities\") pod \"community-operators-g6md9\" (UID: \"12638a02-8cb5-4367-a17a-fc50a1d9ddfb\") " pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.548557 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-catalog-content\") pod \"community-operators-g6md9\" (UID: \"12638a02-8cb5-4367-a17a-fc50a1d9ddfb\") " pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.548589 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs85t\" (UniqueName: \"kubernetes.io/projected/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-kube-api-access-vs85t\") pod \"community-operators-g6md9\" (UID: \"12638a02-8cb5-4367-a17a-fc50a1d9ddfb\") " pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:45:33 crc 
kubenswrapper[4687]: E0131 06:45:33.548683 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:34.048667325 +0000 UTC m=+160.325926900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.581852 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.631182 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l2btx"] Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.632843 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.648523 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l2btx"] Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.649300 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs85t\" (UniqueName: \"kubernetes.io/projected/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-kube-api-access-vs85t\") pod \"community-operators-g6md9\" (UID: \"12638a02-8cb5-4367-a17a-fc50a1d9ddfb\") " pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.649376 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.649394 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-utilities\") pod \"community-operators-g6md9\" (UID: \"12638a02-8cb5-4367-a17a-fc50a1d9ddfb\") " pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.649468 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-catalog-content\") pod \"community-operators-g6md9\" (UID: \"12638a02-8cb5-4367-a17a-fc50a1d9ddfb\") " pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.649958 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-catalog-content\") pod \"community-operators-g6md9\" (UID: \"12638a02-8cb5-4367-a17a-fc50a1d9ddfb\") " pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.650939 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-utilities\") pod \"community-operators-g6md9\" (UID: \"12638a02-8cb5-4367-a17a-fc50a1d9ddfb\") " pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:45:33 crc kubenswrapper[4687]: E0131 06:45:33.660662 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:34.16062359 +0000 UTC m=+160.437883165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.717976 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs85t\" (UniqueName: \"kubernetes.io/projected/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-kube-api-access-vs85t\") pod \"community-operators-g6md9\" (UID: \"12638a02-8cb5-4367-a17a-fc50a1d9ddfb\") " pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.750212 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:33 crc kubenswrapper[4687]: E0131 06:45:33.750317 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:34.250299608 +0000 UTC m=+160.527559183 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.750511 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj6s4\" (UniqueName: \"kubernetes.io/projected/8ed021eb-a227-4014-a487-72aa0de25bac-kube-api-access-tj6s4\") pod \"certified-operators-l2btx\" (UID: \"8ed021eb-a227-4014-a487-72aa0de25bac\") " pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.750558 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ed021eb-a227-4014-a487-72aa0de25bac-utilities\") pod \"certified-operators-l2btx\" (UID: \"8ed021eb-a227-4014-a487-72aa0de25bac\") " pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.750577 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ed021eb-a227-4014-a487-72aa0de25bac-catalog-content\") pod \"certified-operators-l2btx\" (UID: \"8ed021eb-a227-4014-a487-72aa0de25bac\") " pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.750835 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:33 crc kubenswrapper[4687]: E0131 06:45:33.751281 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:34.251265156 +0000 UTC m=+160.528524731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.804183 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j46rp"] Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.806041 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.832980 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j46rp"] Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.847437 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.852962 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:33 crc kubenswrapper[4687]: E0131 06:45:33.853196 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:34.353162784 +0000 UTC m=+160.630422359 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.853397 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ed021eb-a227-4014-a487-72aa0de25bac-utilities\") pod \"certified-operators-l2btx\" (UID: \"8ed021eb-a227-4014-a487-72aa0de25bac\") " pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.853441 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ed021eb-a227-4014-a487-72aa0de25bac-catalog-content\") pod \"certified-operators-l2btx\" 
(UID: \"8ed021eb-a227-4014-a487-72aa0de25bac\") " pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.853504 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.853532 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj6s4\" (UniqueName: \"kubernetes.io/projected/8ed021eb-a227-4014-a487-72aa0de25bac-kube-api-access-tj6s4\") pod \"certified-operators-l2btx\" (UID: \"8ed021eb-a227-4014-a487-72aa0de25bac\") " pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.854541 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ed021eb-a227-4014-a487-72aa0de25bac-utilities\") pod \"certified-operators-l2btx\" (UID: \"8ed021eb-a227-4014-a487-72aa0de25bac\") " pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.854645 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ed021eb-a227-4014-a487-72aa0de25bac-catalog-content\") pod \"certified-operators-l2btx\" (UID: \"8ed021eb-a227-4014-a487-72aa0de25bac\") " pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:45:33 crc kubenswrapper[4687]: E0131 06:45:33.854805 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 06:45:34.35479434 +0000 UTC m=+160.632053915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.903666 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj6s4\" (UniqueName: \"kubernetes.io/projected/8ed021eb-a227-4014-a487-72aa0de25bac-kube-api-access-tj6s4\") pod \"certified-operators-l2btx\" (UID: \"8ed021eb-a227-4014-a487-72aa0de25bac\") " pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.954497 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:33 crc kubenswrapper[4687]: E0131 06:45:33.954697 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:34.45467766 +0000 UTC m=+160.731937225 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.954843 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.954909 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r26dw\" (UniqueName: \"kubernetes.io/projected/dceba003-329b-4858-a9d2-7499eef39366-kube-api-access-r26dw\") pod \"community-operators-j46rp\" (UID: \"dceba003-329b-4858-a9d2-7499eef39366\") " pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.954976 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dceba003-329b-4858-a9d2-7499eef39366-utilities\") pod \"community-operators-j46rp\" (UID: \"dceba003-329b-4858-a9d2-7499eef39366\") " pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.955003 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/dceba003-329b-4858-a9d2-7499eef39366-catalog-content\") pod \"community-operators-j46rp\" (UID: \"dceba003-329b-4858-a9d2-7499eef39366\") " pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:45:33 crc kubenswrapper[4687]: E0131 06:45:33.955318 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:34.455308528 +0000 UTC m=+160.732568173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:33 crc kubenswrapper[4687]: I0131 06:45:33.963794 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.057077 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.057528 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dceba003-329b-4858-a9d2-7499eef39366-catalog-content\") pod \"community-operators-j46rp\" (UID: \"dceba003-329b-4858-a9d2-7499eef39366\") " pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.057604 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r26dw\" (UniqueName: \"kubernetes.io/projected/dceba003-329b-4858-a9d2-7499eef39366-kube-api-access-r26dw\") pod \"community-operators-j46rp\" (UID: \"dceba003-329b-4858-a9d2-7499eef39366\") " pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.057652 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dceba003-329b-4858-a9d2-7499eef39366-utilities\") pod \"community-operators-j46rp\" (UID: \"dceba003-329b-4858-a9d2-7499eef39366\") " pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.058094 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dceba003-329b-4858-a9d2-7499eef39366-utilities\") pod \"community-operators-j46rp\" (UID: \"dceba003-329b-4858-a9d2-7499eef39366\") " 
pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:45:34 crc kubenswrapper[4687]: E0131 06:45:34.058176 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:34.558157673 +0000 UTC m=+160.835417258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.058537 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dceba003-329b-4858-a9d2-7499eef39366-catalog-content\") pod \"community-operators-j46rp\" (UID: \"dceba003-329b-4858-a9d2-7499eef39366\") " pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.107212 4687 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.108009 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r26dw\" (UniqueName: \"kubernetes.io/projected/dceba003-329b-4858-a9d2-7499eef39366-kube-api-access-r26dw\") pod \"community-operators-j46rp\" (UID: \"dceba003-329b-4858-a9d2-7499eef39366\") " pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.131717 4687 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.161381 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:34 crc kubenswrapper[4687]: E0131 06:45:34.161815 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:34.66180015 +0000 UTC m=+160.939059725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.215776 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6tt8"] Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.262699 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:34 crc 
kubenswrapper[4687]: E0131 06:45:34.263342 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:34.763326227 +0000 UTC m=+161.040585802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.364996 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:34 crc kubenswrapper[4687]: E0131 06:45:34.365346 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:34.865331257 +0000 UTC m=+161.142590832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.467596 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:34 crc kubenswrapper[4687]: E0131 06:45:34.467813 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:34.96779048 +0000 UTC m=+161.245050055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.467959 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:34 crc kubenswrapper[4687]: E0131 06:45:34.468264 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:34.968254934 +0000 UTC m=+161.245514509 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.522253 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l2btx"] Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.526400 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g6md9"] Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.528157 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:34 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:34 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:34 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.528189 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:34 crc kubenswrapper[4687]: W0131 06:45:34.537969 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ed021eb_a227_4014_a487_72aa0de25bac.slice/crio-2502c52c2d1cd0bfd1f3e48fb2aa6630612be228eed44211fe5f0b5343fafd74 WatchSource:0}: Error finding container 
2502c52c2d1cd0bfd1f3e48fb2aa6630612be228eed44211fe5f0b5343fafd74: Status 404 returned error can't find the container with id 2502c52c2d1cd0bfd1f3e48fb2aa6630612be228eed44211fe5f0b5343fafd74 Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.541819 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"086d11585da1b27cdc2befdc4212cac1b4222571a22b6edccf98d28809d2c07e"} Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.557206 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6k67p" event={"ID":"7c8d3ed7-cfa7-413a-bcd8-585109bab7e7","Type":"ContainerStarted","Data":"985b74856e4c825c8b472d4fa60d3684773e328a804b47a5aa93fbda75c515b5"} Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.561722 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0794b0a3862609ab20bf62cc844557d6c3b7444a9af0f99d4bd54cd6617aac26"} Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.561928 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.572531 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:34 crc kubenswrapper[4687]: E0131 06:45:34.572898 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:35.072879389 +0000 UTC m=+161.350138974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.573714 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6tt8" event={"ID":"2a8064f7-2493-4fd0-a460-9d98ebdd1a24","Type":"ContainerStarted","Data":"eac937ee3a418174cb5dfdf245797bcd483c4f9d36220269586ba95c6bbffad9"} Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.590903 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"afa2af3753c71f285bb1277869c7b69b02ace7c40d5b0e4d9c16d969cc4e763d"} Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.616901 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6k67p" podStartSLOduration=11.616874744 podStartE2EDuration="11.616874744s" podCreationTimestamp="2026-01-31 06:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:34.590653706 +0000 UTC m=+160.867913291" watchObservedRunningTime="2026-01-31 06:45:34.616874744 +0000 UTC m=+160.894134329" Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.676511 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:34 crc kubenswrapper[4687]: E0131 06:45:34.678283 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:35.178269106 +0000 UTC m=+161.455528681 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.737214 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j46rp"] Jan 31 06:45:34 crc kubenswrapper[4687]: W0131 06:45:34.755966 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddceba003_329b_4858_a9d2_7499eef39366.slice/crio-95fb4a6a8808c2b3a4f3c599756db89c2c32fe027b18f45bd693cfffa1242d19 WatchSource:0}: Error finding container 95fb4a6a8808c2b3a4f3c599756db89c2c32fe027b18f45bd693cfffa1242d19: Status 404 returned error can't find the container with id 95fb4a6a8808c2b3a4f3c599756db89c2c32fe027b18f45bd693cfffa1242d19 Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.777174 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:34 crc kubenswrapper[4687]: E0131 06:45:34.777330 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:35.277304922 +0000 UTC m=+161.554564497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.777467 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:34 crc kubenswrapper[4687]: E0131 06:45:34.777751 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:35.277738915 +0000 UTC m=+161.554998480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.878825 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:34 crc kubenswrapper[4687]: E0131 06:45:34.879180 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 06:45:35.379165079 +0000 UTC m=+161.656424654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.891052 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zfg87" Jan 31 06:45:34 crc kubenswrapper[4687]: I0131 06:45:34.983785 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:34 crc kubenswrapper[4687]: E0131 06:45:34.985198 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 06:45:35.485183224 +0000 UTC m=+161.762442799 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zm4ws" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.018777 4687 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-31T06:45:34.107240233Z","Handler":null,"Name":""} Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.034241 4687 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.034491 4687 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.085554 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.092582 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.094440 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.095495 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.097395 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.097665 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.141717 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.142039 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.142235 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.152590 4687 patch_prober.go:28] interesting pod/apiserver-76f77b778f-bxz2x container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 31 06:45:35 crc kubenswrapper[4687]: [+]log ok Jan 31 06:45:35 crc kubenswrapper[4687]: [+]etcd ok Jan 31 06:45:35 crc kubenswrapper[4687]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 31 06:45:35 crc 
kubenswrapper[4687]: [+]poststarthook/generic-apiserver-start-informers ok Jan 31 06:45:35 crc kubenswrapper[4687]: [+]poststarthook/max-in-flight-filter ok Jan 31 06:45:35 crc kubenswrapper[4687]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 31 06:45:35 crc kubenswrapper[4687]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 31 06:45:35 crc kubenswrapper[4687]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 31 06:45:35 crc kubenswrapper[4687]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 31 06:45:35 crc kubenswrapper[4687]: [+]poststarthook/project.openshift.io-projectcache ok Jan 31 06:45:35 crc kubenswrapper[4687]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 31 06:45:35 crc kubenswrapper[4687]: [+]poststarthook/openshift.io-startinformers ok Jan 31 06:45:35 crc kubenswrapper[4687]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 31 06:45:35 crc kubenswrapper[4687]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 31 06:45:35 crc kubenswrapper[4687]: livez check failed Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.153349 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" podUID="ba4ee6bf-8298-425c-8603-0816ef6d62a2" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.187488 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.187592 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.187621 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.207222 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.207475 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.207501 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.212666 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9tfw" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.243595 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zm4ws\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.289078 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.289179 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.290831 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.318735 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.319045 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.390556 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kpmd6"] Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.391515 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.393229 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.401088 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kpmd6"] Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.453985 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.498026 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzthl\" (UniqueName: \"kubernetes.io/projected/fe701715-9a81-4ba7-be4b-f52834728547-kube-api-access-pzthl\") pod \"redhat-marketplace-kpmd6\" (UID: \"fe701715-9a81-4ba7-be4b-f52834728547\") " pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.498468 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe701715-9a81-4ba7-be4b-f52834728547-utilities\") pod \"redhat-marketplace-kpmd6\" (UID: \"fe701715-9a81-4ba7-be4b-f52834728547\") " pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.498491 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe701715-9a81-4ba7-be4b-f52834728547-catalog-content\") pod \"redhat-marketplace-kpmd6\" (UID: \"fe701715-9a81-4ba7-be4b-f52834728547\") " pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.518552 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.522830 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:35 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:35 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:35 crc 
kubenswrapper[4687]: healthz check failed Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.522875 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.564649 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zm4ws"] Jan 31 06:45:35 crc kubenswrapper[4687]: W0131 06:45:35.571051 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e49c821_a661_46f0_bbce_7cc8366fee3f.slice/crio-63d9c4880212e25f8442ea5b30c1cbd7fc1f9f91d0d8ab48764a50ec5c48d018 WatchSource:0}: Error finding container 63d9c4880212e25f8442ea5b30c1cbd7fc1f9f91d0d8ab48764a50ec5c48d018: Status 404 returned error can't find the container with id 63d9c4880212e25f8442ea5b30c1cbd7fc1f9f91d0d8ab48764a50ec5c48d018 Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.595780 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.596667 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.600621 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.600818 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.601671 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.601810 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzthl\" (UniqueName: \"kubernetes.io/projected/fe701715-9a81-4ba7-be4b-f52834728547-kube-api-access-pzthl\") pod \"redhat-marketplace-kpmd6\" (UID: \"fe701715-9a81-4ba7-be4b-f52834728547\") " pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.601956 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe701715-9a81-4ba7-be4b-f52834728547-utilities\") pod \"redhat-marketplace-kpmd6\" (UID: \"fe701715-9a81-4ba7-be4b-f52834728547\") " pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.601978 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe701715-9a81-4ba7-be4b-f52834728547-catalog-content\") pod \"redhat-marketplace-kpmd6\" (UID: \"fe701715-9a81-4ba7-be4b-f52834728547\") " pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.603052 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/fe701715-9a81-4ba7-be4b-f52834728547-utilities\") pod \"redhat-marketplace-kpmd6\" (UID: \"fe701715-9a81-4ba7-be4b-f52834728547\") " pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.603089 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe701715-9a81-4ba7-be4b-f52834728547-catalog-content\") pod \"redhat-marketplace-kpmd6\" (UID: \"fe701715-9a81-4ba7-be4b-f52834728547\") " pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.631606 4687 patch_prober.go:28] interesting pod/console-f9d7485db-crdmb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.631663 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-crdmb" podUID="c1b4bdad-f662-48bd-b1ae-1a9916973b8b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.661926 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzthl\" (UniqueName: \"kubernetes.io/projected/fe701715-9a81-4ba7-be4b-f52834728547-kube-api-access-pzthl\") pod \"redhat-marketplace-kpmd6\" (UID: \"fe701715-9a81-4ba7-be4b-f52834728547\") " pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.664109 4687 generic.go:334] "Generic (PLEG): container finished" podID="dceba003-329b-4858-a9d2-7499eef39366" containerID="963fc72e7a5eca876faf40595961f07491c7d65d1b0aab34cb731fb96ac9e02f" exitCode=0 Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 
06:45:35.664869 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.665484 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.665524 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j46rp" event={"ID":"dceba003-329b-4858-a9d2-7499eef39366","Type":"ContainerDied","Data":"963fc72e7a5eca876faf40595961f07491c7d65d1b0aab34cb731fb96ac9e02f"} Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.665550 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.665559 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j46rp" event={"ID":"dceba003-329b-4858-a9d2-7499eef39366","Type":"ContainerStarted","Data":"95fb4a6a8808c2b3a4f3c599756db89c2c32fe027b18f45bd693cfffa1242d19"} Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.666138 4687 generic.go:334] "Generic (PLEG): container finished" podID="8ed021eb-a227-4014-a487-72aa0de25bac" containerID="2016c73be2e3dfb1602c64078c9d4ebbed9c3653ea93407b1e064237f0062675" exitCode=0 Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.666288 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2btx" event={"ID":"8ed021eb-a227-4014-a487-72aa0de25bac","Type":"ContainerDied","Data":"2016c73be2e3dfb1602c64078c9d4ebbed9c3653ea93407b1e064237f0062675"} Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.666310 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2btx" 
event={"ID":"8ed021eb-a227-4014-a487-72aa0de25bac","Type":"ContainerStarted","Data":"2502c52c2d1cd0bfd1f3e48fb2aa6630612be228eed44211fe5f0b5343fafd74"} Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.667367 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.668617 4687 generic.go:334] "Generic (PLEG): container finished" podID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" containerID="b180d793b305e1696845acc3a2a8155b7fe53d6a6080d0b6f2070bf5b8a09dfe" exitCode=0 Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.668659 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6md9" event={"ID":"12638a02-8cb5-4367-a17a-fc50a1d9ddfb","Type":"ContainerDied","Data":"b180d793b305e1696845acc3a2a8155b7fe53d6a6080d0b6f2070bf5b8a09dfe"} Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.668673 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6md9" event={"ID":"12638a02-8cb5-4367-a17a-fc50a1d9ddfb","Type":"ContainerStarted","Data":"277e14c048081285d84cb6f2fd0a83fcf9686efa8b05b16d7b7d90663d347f7f"} Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.673942 4687 generic.go:334] "Generic (PLEG): container finished" podID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" containerID="6a88c31149242f79c112b59ca761b6760fe4e37a2f8fed5cdc060b48c8a43c90" exitCode=0 Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.674006 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6tt8" event={"ID":"2a8064f7-2493-4fd0-a460-9d98ebdd1a24","Type":"ContainerDied","Data":"6a88c31149242f79c112b59ca761b6760fe4e37a2f8fed5cdc060b48c8a43c90"} Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.677273 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" 
event={"ID":"8e49c821-a661-46f0-bbce-7cc8366fee3f","Type":"ContainerStarted","Data":"63d9c4880212e25f8442ea5b30c1cbd7fc1f9f91d0d8ab48764a50ec5c48d018"} Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.702931 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88de6162-f1fc-4140-89d2-1ec151ffe6b1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"88de6162-f1fc-4140-89d2-1ec151ffe6b1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.703018 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88de6162-f1fc-4140-89d2-1ec151ffe6b1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"88de6162-f1fc-4140-89d2-1ec151ffe6b1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.712146 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.713055 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.804890 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88de6162-f1fc-4140-89d2-1ec151ffe6b1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"88de6162-f1fc-4140-89d2-1ec151ffe6b1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.805374 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88de6162-f1fc-4140-89d2-1ec151ffe6b1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"88de6162-f1fc-4140-89d2-1ec151ffe6b1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.808122 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88de6162-f1fc-4140-89d2-1ec151ffe6b1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"88de6162-f1fc-4140-89d2-1ec151ffe6b1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.811273 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xkfv6"] Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.812479 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.834072 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xkfv6"] Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.837361 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88de6162-f1fc-4140-89d2-1ec151ffe6b1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"88de6162-f1fc-4140-89d2-1ec151ffe6b1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.843359 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6qrp" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.872874 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6962" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.872954 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-vxbfn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.872993 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-vxbfn" podUID="e5b7bf80-e0c2-461f-944b-43b00db98f09" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.873057 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-vxbfn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.873105 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vxbfn" podUID="e5b7bf80-e0c2-461f-944b-43b00db98f09" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.908466 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqpxr\" (UniqueName: \"kubernetes.io/projected/267c7942-99ed-42bc-bb0c-3d2a2119267e-kube-api-access-nqpxr\") pod \"redhat-marketplace-xkfv6\" (UID: \"267c7942-99ed-42bc-bb0c-3d2a2119267e\") " pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.908562 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/267c7942-99ed-42bc-bb0c-3d2a2119267e-utilities\") pod \"redhat-marketplace-xkfv6\" (UID: \"267c7942-99ed-42bc-bb0c-3d2a2119267e\") " pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.908612 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/267c7942-99ed-42bc-bb0c-3d2a2119267e-catalog-content\") pod \"redhat-marketplace-xkfv6\" (UID: \"267c7942-99ed-42bc-bb0c-3d2a2119267e\") " pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:45:35 crc kubenswrapper[4687]: I0131 06:45:35.961853 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.009666 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqpxr\" (UniqueName: \"kubernetes.io/projected/267c7942-99ed-42bc-bb0c-3d2a2119267e-kube-api-access-nqpxr\") pod \"redhat-marketplace-xkfv6\" (UID: \"267c7942-99ed-42bc-bb0c-3d2a2119267e\") " pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.010203 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/267c7942-99ed-42bc-bb0c-3d2a2119267e-utilities\") pod \"redhat-marketplace-xkfv6\" (UID: \"267c7942-99ed-42bc-bb0c-3d2a2119267e\") " pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.010247 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/267c7942-99ed-42bc-bb0c-3d2a2119267e-catalog-content\") pod \"redhat-marketplace-xkfv6\" (UID: \"267c7942-99ed-42bc-bb0c-3d2a2119267e\") " pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.015170 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/267c7942-99ed-42bc-bb0c-3d2a2119267e-catalog-content\") pod \"redhat-marketplace-xkfv6\" (UID: \"267c7942-99ed-42bc-bb0c-3d2a2119267e\") " pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.022541 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/267c7942-99ed-42bc-bb0c-3d2a2119267e-utilities\") pod \"redhat-marketplace-xkfv6\" (UID: \"267c7942-99ed-42bc-bb0c-3d2a2119267e\") " 
pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.034170 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqpxr\" (UniqueName: \"kubernetes.io/projected/267c7942-99ed-42bc-bb0c-3d2a2119267e-kube-api-access-nqpxr\") pod \"redhat-marketplace-xkfv6\" (UID: \"267c7942-99ed-42bc-bb0c-3d2a2119267e\") " pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.090396 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kpmd6"] Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.136126 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.396292 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xkfv6"] Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.400029 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q7f5g"] Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.401350 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.404451 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.436176 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7f5g"] Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.469913 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.518104 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:36 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:36 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:36 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.518320 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.537451 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4dc04b-0379-4855-8b63-4ef29d0d6647-utilities\") pod \"redhat-operators-q7f5g\" (UID: \"3b4dc04b-0379-4855-8b63-4ef29d0d6647\") " pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.537512 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4dc04b-0379-4855-8b63-4ef29d0d6647-catalog-content\") pod \"redhat-operators-q7f5g\" (UID: \"3b4dc04b-0379-4855-8b63-4ef29d0d6647\") " pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.537544 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kb7j\" (UniqueName: \"kubernetes.io/projected/3b4dc04b-0379-4855-8b63-4ef29d0d6647-kube-api-access-7kb7j\") pod \"redhat-operators-q7f5g\" (UID: \"3b4dc04b-0379-4855-8b63-4ef29d0d6647\") " pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.638397 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4dc04b-0379-4855-8b63-4ef29d0d6647-utilities\") pod \"redhat-operators-q7f5g\" (UID: \"3b4dc04b-0379-4855-8b63-4ef29d0d6647\") " pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.638516 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4dc04b-0379-4855-8b63-4ef29d0d6647-catalog-content\") pod \"redhat-operators-q7f5g\" (UID: \"3b4dc04b-0379-4855-8b63-4ef29d0d6647\") " pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.638541 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kb7j\" (UniqueName: \"kubernetes.io/projected/3b4dc04b-0379-4855-8b63-4ef29d0d6647-kube-api-access-7kb7j\") pod \"redhat-operators-q7f5g\" (UID: \"3b4dc04b-0379-4855-8b63-4ef29d0d6647\") " pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.639035 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/3b4dc04b-0379-4855-8b63-4ef29d0d6647-utilities\") pod \"redhat-operators-q7f5g\" (UID: \"3b4dc04b-0379-4855-8b63-4ef29d0d6647\") " pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.639975 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4dc04b-0379-4855-8b63-4ef29d0d6647-catalog-content\") pod \"redhat-operators-q7f5g\" (UID: \"3b4dc04b-0379-4855-8b63-4ef29d0d6647\") " pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.658450 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kb7j\" (UniqueName: \"kubernetes.io/projected/3b4dc04b-0379-4855-8b63-4ef29d0d6647-kube-api-access-7kb7j\") pod \"redhat-operators-q7f5g\" (UID: \"3b4dc04b-0379-4855-8b63-4ef29d0d6647\") " pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.691012 4687 generic.go:334] "Generic (PLEG): container finished" podID="fe701715-9a81-4ba7-be4b-f52834728547" containerID="27e93ea1fb8337dce0d9e61b9e404dba2e6980c4ce0de5d7b5e49f5234e5f622" exitCode=0 Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.691071 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpmd6" event={"ID":"fe701715-9a81-4ba7-be4b-f52834728547","Type":"ContainerDied","Data":"27e93ea1fb8337dce0d9e61b9e404dba2e6980c4ce0de5d7b5e49f5234e5f622"} Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.691095 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpmd6" event={"ID":"fe701715-9a81-4ba7-be4b-f52834728547","Type":"ContainerStarted","Data":"f067e37ed712378fd5421bd8c46994c76110f9524c3f3c2e6d2bc37088c3a0ea"} Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.695186 4687 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"88de6162-f1fc-4140-89d2-1ec151ffe6b1","Type":"ContainerStarted","Data":"666bbef31677b5d818d6ca596aa0358e0bbbdfe02fd8d4125e092a99c805ff30"} Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.697207 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3","Type":"ContainerStarted","Data":"88d607f057baec785e2d61f7e801126930d4fbfa25213f5fa390e8e6f7f47c2f"} Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.697259 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3","Type":"ContainerStarted","Data":"eef7c1b7fc97b5e93bfd69a60417f601623e631431b89c871e690f4b0c1059f6"} Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.701044 4687 generic.go:334] "Generic (PLEG): container finished" podID="267c7942-99ed-42bc-bb0c-3d2a2119267e" containerID="0b0458d6d7fbe76b6736e307a256b2fdb03957fd3cd3f2394af5ee1475fe5cc3" exitCode=0 Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.701108 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkfv6" event={"ID":"267c7942-99ed-42bc-bb0c-3d2a2119267e","Type":"ContainerDied","Data":"0b0458d6d7fbe76b6736e307a256b2fdb03957fd3cd3f2394af5ee1475fe5cc3"} Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.701139 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkfv6" event={"ID":"267c7942-99ed-42bc-bb0c-3d2a2119267e","Type":"ContainerStarted","Data":"9a904c446d3fe91bc90076ab7632ee4b16e869de24378702cdd9a620e1f50946"} Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.710615 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" 
event={"ID":"8e49c821-a661-46f0-bbce-7cc8366fee3f","Type":"ContainerStarted","Data":"e881947c46d59ea96bf3953c496423ab811050b77857fb188392e3b7255d906b"} Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.710978 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:36 crc kubenswrapper[4687]: E0131 06:45:36.720637 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod267c7942_99ed_42bc_bb0c_3d2a2119267e.slice/crio-conmon-0b0458d6d7fbe76b6736e307a256b2fdb03957fd3cd3f2394af5ee1475fe5cc3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod267c7942_99ed_42bc_bb0c_3d2a2119267e.slice/crio-0b0458d6d7fbe76b6736e307a256b2fdb03957fd3cd3f2394af5ee1475fe5cc3.scope\": RecentStats: unable to find data in memory cache]" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.739691 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.739675656 podStartE2EDuration="1.739675656s" podCreationTimestamp="2026-01-31 06:45:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:36.730475134 +0000 UTC m=+163.007734719" watchObservedRunningTime="2026-01-31 06:45:36.739675656 +0000 UTC m=+163.016935231" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.748779 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.766631 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" podStartSLOduration=129.766615005 podStartE2EDuration="2m9.766615005s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:45:36.763876377 +0000 UTC m=+163.041135952" watchObservedRunningTime="2026-01-31 06:45:36.766615005 +0000 UTC m=+163.043874580" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.796648 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mrjq6"] Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.802184 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.803352 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mrjq6"] Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.946481 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9539b4b-d10e-4607-9195-0acd7cee10c8-utilities\") pod \"redhat-operators-mrjq6\" (UID: \"d9539b4b-d10e-4607-9195-0acd7cee10c8\") " pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.948760 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9539b4b-d10e-4607-9195-0acd7cee10c8-catalog-content\") pod \"redhat-operators-mrjq6\" (UID: \"d9539b4b-d10e-4607-9195-0acd7cee10c8\") " 
pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:45:36 crc kubenswrapper[4687]: I0131 06:45:36.948801 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdj8t\" (UniqueName: \"kubernetes.io/projected/d9539b4b-d10e-4607-9195-0acd7cee10c8-kube-api-access-xdj8t\") pod \"redhat-operators-mrjq6\" (UID: \"d9539b4b-d10e-4607-9195-0acd7cee10c8\") " pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.024029 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7f5g"] Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.051128 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9539b4b-d10e-4607-9195-0acd7cee10c8-utilities\") pod \"redhat-operators-mrjq6\" (UID: \"d9539b4b-d10e-4607-9195-0acd7cee10c8\") " pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.051224 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9539b4b-d10e-4607-9195-0acd7cee10c8-catalog-content\") pod \"redhat-operators-mrjq6\" (UID: \"d9539b4b-d10e-4607-9195-0acd7cee10c8\") " pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.051264 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdj8t\" (UniqueName: \"kubernetes.io/projected/d9539b4b-d10e-4607-9195-0acd7cee10c8-kube-api-access-xdj8t\") pod \"redhat-operators-mrjq6\" (UID: \"d9539b4b-d10e-4607-9195-0acd7cee10c8\") " pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.051610 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d9539b4b-d10e-4607-9195-0acd7cee10c8-utilities\") pod \"redhat-operators-mrjq6\" (UID: \"d9539b4b-d10e-4607-9195-0acd7cee10c8\") " pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.051725 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9539b4b-d10e-4607-9195-0acd7cee10c8-catalog-content\") pod \"redhat-operators-mrjq6\" (UID: \"d9539b4b-d10e-4607-9195-0acd7cee10c8\") " pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.073661 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdj8t\" (UniqueName: \"kubernetes.io/projected/d9539b4b-d10e-4607-9195-0acd7cee10c8-kube-api-access-xdj8t\") pod \"redhat-operators-mrjq6\" (UID: \"d9539b4b-d10e-4607-9195-0acd7cee10c8\") " pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.158651 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.501281 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mrjq6"] Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.518842 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:37 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:37 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:37 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.518898 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:37 crc kubenswrapper[4687]: W0131 06:45:37.552792 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9539b4b_d10e_4607_9195_0acd7cee10c8.slice/crio-0c6f5861153e6a07b30cdad33611c2d9f12284f39385f73710e7b4e16cdab4b3 WatchSource:0}: Error finding container 0c6f5861153e6a07b30cdad33611c2d9f12284f39385f73710e7b4e16cdab4b3: Status 404 returned error can't find the container with id 0c6f5861153e6a07b30cdad33611c2d9f12284f39385f73710e7b4e16cdab4b3 Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.721736 4687 generic.go:334] "Generic (PLEG): container finished" podID="88de6162-f1fc-4140-89d2-1ec151ffe6b1" containerID="9a35a38e02f70975fc44cc1ec517157ee67f04c481ee212a9bbef29219e67a0b" exitCode=0 Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.722327 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"88de6162-f1fc-4140-89d2-1ec151ffe6b1","Type":"ContainerDied","Data":"9a35a38e02f70975fc44cc1ec517157ee67f04c481ee212a9bbef29219e67a0b"} Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.728371 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrjq6" event={"ID":"d9539b4b-d10e-4607-9195-0acd7cee10c8","Type":"ContainerStarted","Data":"0c6f5861153e6a07b30cdad33611c2d9f12284f39385f73710e7b4e16cdab4b3"} Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.730965 4687 generic.go:334] "Generic (PLEG): container finished" podID="2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3" containerID="88d607f057baec785e2d61f7e801126930d4fbfa25213f5fa390e8e6f7f47c2f" exitCode=0 Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.731073 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3","Type":"ContainerDied","Data":"88d607f057baec785e2d61f7e801126930d4fbfa25213f5fa390e8e6f7f47c2f"} Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.733662 4687 generic.go:334] "Generic (PLEG): container finished" podID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" containerID="2a221d0b637be3284cc0931ec113911e071f3c104ef021a6c6b5e2bcced3b351" exitCode=0 Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.735092 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7f5g" event={"ID":"3b4dc04b-0379-4855-8b63-4ef29d0d6647","Type":"ContainerDied","Data":"2a221d0b637be3284cc0931ec113911e071f3c104ef021a6c6b5e2bcced3b351"} Jan 31 06:45:37 crc kubenswrapper[4687]: I0131 06:45:37.735119 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7f5g" 
event={"ID":"3b4dc04b-0379-4855-8b63-4ef29d0d6647","Type":"ContainerStarted","Data":"f4eb7e3048a747dca1d56184f43180e8ecec6eb3e5c7989594c986251c745e91"} Jan 31 06:45:38 crc kubenswrapper[4687]: I0131 06:45:38.518691 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:38 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:38 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:38 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:38 crc kubenswrapper[4687]: I0131 06:45:38.518739 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:38 crc kubenswrapper[4687]: I0131 06:45:38.746199 4687 generic.go:334] "Generic (PLEG): container finished" podID="d9539b4b-d10e-4607-9195-0acd7cee10c8" containerID="c1f7339bda54e308bdd2c68627cc1662fd2aef5584e32fb3958515bdded0f27b" exitCode=0 Jan 31 06:45:38 crc kubenswrapper[4687]: I0131 06:45:38.748932 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrjq6" event={"ID":"d9539b4b-d10e-4607-9195-0acd7cee10c8","Type":"ContainerDied","Data":"c1f7339bda54e308bdd2c68627cc1662fd2aef5584e32fb3958515bdded0f27b"} Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.116906 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.123976 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.188961 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88de6162-f1fc-4140-89d2-1ec151ffe6b1-kubelet-dir\") pod \"88de6162-f1fc-4140-89d2-1ec151ffe6b1\" (UID: \"88de6162-f1fc-4140-89d2-1ec151ffe6b1\") " Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.189018 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88de6162-f1fc-4140-89d2-1ec151ffe6b1-kube-api-access\") pod \"88de6162-f1fc-4140-89d2-1ec151ffe6b1\" (UID: \"88de6162-f1fc-4140-89d2-1ec151ffe6b1\") " Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.189047 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3-kubelet-dir\") pod \"2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3\" (UID: \"2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3\") " Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.189100 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3-kube-api-access\") pod \"2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3\" (UID: \"2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3\") " Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.189102 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88de6162-f1fc-4140-89d2-1ec151ffe6b1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "88de6162-f1fc-4140-89d2-1ec151ffe6b1" (UID: "88de6162-f1fc-4140-89d2-1ec151ffe6b1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.189205 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3" (UID: "2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.189560 4687 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88de6162-f1fc-4140-89d2-1ec151ffe6b1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.189579 4687 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.197057 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88de6162-f1fc-4140-89d2-1ec151ffe6b1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "88de6162-f1fc-4140-89d2-1ec151ffe6b1" (UID: "88de6162-f1fc-4140-89d2-1ec151ffe6b1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.219354 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3" (UID: "2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.291581 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88de6162-f1fc-4140-89d2-1ec151ffe6b1-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.291611 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.517801 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:39 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:39 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:39 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.517870 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.758295 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3","Type":"ContainerDied","Data":"eef7c1b7fc97b5e93bfd69a60417f601623e631431b89c871e690f4b0c1059f6"} Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.758335 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef7c1b7fc97b5e93bfd69a60417f601623e631431b89c871e690f4b0c1059f6" Jan 31 06:45:39 crc 
kubenswrapper[4687]: I0131 06:45:39.758401 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.765360 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.765267 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"88de6162-f1fc-4140-89d2-1ec151ffe6b1","Type":"ContainerDied","Data":"666bbef31677b5d818d6ca596aa0358e0bbbdfe02fd8d4125e092a99c805ff30"} Jan 31 06:45:39 crc kubenswrapper[4687]: I0131 06:45:39.766186 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="666bbef31677b5d818d6ca596aa0358e0bbbdfe02fd8d4125e092a99c805ff30" Jan 31 06:45:40 crc kubenswrapper[4687]: I0131 06:45:40.147722 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:40 crc kubenswrapper[4687]: I0131 06:45:40.153933 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-bxz2x" Jan 31 06:45:40 crc kubenswrapper[4687]: I0131 06:45:40.518837 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:40 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:40 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:40 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:40 crc kubenswrapper[4687]: I0131 06:45:40.519125 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" 
podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:41 crc kubenswrapper[4687]: I0131 06:45:41.237783 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jp9kx" Jan 31 06:45:41 crc kubenswrapper[4687]: I0131 06:45:41.520189 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:41 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:41 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:41 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:41 crc kubenswrapper[4687]: I0131 06:45:41.520262 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:42 crc kubenswrapper[4687]: I0131 06:45:42.517967 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:42 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:42 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:42 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:42 crc kubenswrapper[4687]: I0131 06:45:42.518044 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:43 crc 
kubenswrapper[4687]: I0131 06:45:43.518425 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:43 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:43 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:43 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:43 crc kubenswrapper[4687]: I0131 06:45:43.518485 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:44 crc kubenswrapper[4687]: I0131 06:45:44.517707 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:44 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:44 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:44 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:44 crc kubenswrapper[4687]: I0131 06:45:44.517793 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:45 crc kubenswrapper[4687]: I0131 06:45:45.517961 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:45 crc kubenswrapper[4687]: 
[-]has-synced failed: reason withheld Jan 31 06:45:45 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:45 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:45 crc kubenswrapper[4687]: I0131 06:45:45.518036 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:45 crc kubenswrapper[4687]: I0131 06:45:45.616958 4687 patch_prober.go:28] interesting pod/console-f9d7485db-crdmb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 31 06:45:45 crc kubenswrapper[4687]: I0131 06:45:45.617003 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-crdmb" podUID="c1b4bdad-f662-48bd-b1ae-1a9916973b8b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 31 06:45:45 crc kubenswrapper[4687]: I0131 06:45:45.872856 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-vxbfn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 06:45:45 crc kubenswrapper[4687]: I0131 06:45:45.872880 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-vxbfn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 06:45:45 crc kubenswrapper[4687]: I0131 06:45:45.872907 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-vxbfn" 
podUID="e5b7bf80-e0c2-461f-944b-43b00db98f09" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 06:45:45 crc kubenswrapper[4687]: I0131 06:45:45.872905 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vxbfn" podUID="e5b7bf80-e0c2-461f-944b-43b00db98f09" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 06:45:46 crc kubenswrapper[4687]: I0131 06:45:46.518344 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:46 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:46 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:46 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:46 crc kubenswrapper[4687]: I0131 06:45:46.518443 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:47 crc kubenswrapper[4687]: I0131 06:45:47.518742 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:47 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:47 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:47 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:47 crc kubenswrapper[4687]: I0131 06:45:47.519062 4687 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:48 crc kubenswrapper[4687]: I0131 06:45:48.517912 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:48 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:48 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:48 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:48 crc kubenswrapper[4687]: I0131 06:45:48.517980 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:49 crc kubenswrapper[4687]: I0131 06:45:49.519456 4687 patch_prober.go:28] interesting pod/router-default-5444994796-k7lmb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 06:45:49 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Jan 31 06:45:49 crc kubenswrapper[4687]: [+]process-running ok Jan 31 06:45:49 crc kubenswrapper[4687]: healthz check failed Jan 31 06:45:49 crc kubenswrapper[4687]: I0131 06:45:49.519537 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k7lmb" podUID="ea0d9432-9215-4303-8914-0b0d4c7e49a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 06:45:49 crc kubenswrapper[4687]: I0131 06:45:49.967267 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs\") pod \"network-metrics-daemon-hbxj7\" (UID: \"dead0f10-2469-49a4-8d26-93fc90d6451d\") " pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:45:49 crc kubenswrapper[4687]: I0131 06:45:49.975288 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dead0f10-2469-49a4-8d26-93fc90d6451d-metrics-certs\") pod \"network-metrics-daemon-hbxj7\" (UID: \"dead0f10-2469-49a4-8d26-93fc90d6451d\") " pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:45:50 crc kubenswrapper[4687]: I0131 06:45:50.225147 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbxj7" Jan 31 06:45:50 crc kubenswrapper[4687]: I0131 06:45:50.518418 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:50 crc kubenswrapper[4687]: I0131 06:45:50.521115 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-k7lmb" Jan 31 06:45:55 crc kubenswrapper[4687]: I0131 06:45:55.325327 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:45:55 crc kubenswrapper[4687]: I0131 06:45:55.624987 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:55 crc kubenswrapper[4687]: I0131 06:45:55.628583 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-crdmb" Jan 31 06:45:55 crc kubenswrapper[4687]: I0131 06:45:55.872116 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-vxbfn container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 06:45:55 crc kubenswrapper[4687]: I0131 06:45:55.872400 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vxbfn" podUID="e5b7bf80-e0c2-461f-944b-43b00db98f09" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 06:45:55 crc kubenswrapper[4687]: I0131 06:45:55.872152 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-vxbfn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 06:45:55 crc kubenswrapper[4687]: I0131 06:45:55.872718 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-vxbfn" podUID="e5b7bf80-e0c2-461f-944b-43b00db98f09" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 06:45:55 crc kubenswrapper[4687]: I0131 06:45:55.872742 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-vxbfn" Jan 31 06:45:55 crc kubenswrapper[4687]: I0131 06:45:55.873234 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"1af305cd6fe2ca103ccf609292d234f13f76e3fe9225aa21d0725a8019d3141c"} pod="openshift-console/downloads-7954f5f757-vxbfn" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 31 06:45:55 crc kubenswrapper[4687]: I0131 06:45:55.873300 4687 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-console/downloads-7954f5f757-vxbfn" podUID="e5b7bf80-e0c2-461f-944b-43b00db98f09" containerName="download-server" containerID="cri-o://1af305cd6fe2ca103ccf609292d234f13f76e3fe9225aa21d0725a8019d3141c" gracePeriod=2 Jan 31 06:45:55 crc kubenswrapper[4687]: I0131 06:45:55.873710 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-vxbfn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 06:45:55 crc kubenswrapper[4687]: I0131 06:45:55.873793 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vxbfn" podUID="e5b7bf80-e0c2-461f-944b-43b00db98f09" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 06:45:56 crc kubenswrapper[4687]: I0131 06:45:56.898380 4687 generic.go:334] "Generic (PLEG): container finished" podID="e5b7bf80-e0c2-461f-944b-43b00db98f09" containerID="1af305cd6fe2ca103ccf609292d234f13f76e3fe9225aa21d0725a8019d3141c" exitCode=0 Jan 31 06:45:56 crc kubenswrapper[4687]: I0131 06:45:56.898454 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-vxbfn" event={"ID":"e5b7bf80-e0c2-461f-944b-43b00db98f09","Type":"ContainerDied","Data":"1af305cd6fe2ca103ccf609292d234f13f76e3fe9225aa21d0725a8019d3141c"} Jan 31 06:45:58 crc kubenswrapper[4687]: I0131 06:45:58.684760 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:45:58 crc kubenswrapper[4687]: I0131 06:45:58.685013 4687 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:46:05 crc kubenswrapper[4687]: I0131 06:46:05.872098 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-vxbfn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 06:46:05 crc kubenswrapper[4687]: I0131 06:46:05.872470 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vxbfn" podUID="e5b7bf80-e0c2-461f-944b-43b00db98f09" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 06:46:06 crc kubenswrapper[4687]: I0131 06:46:06.156386 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8rhlq" Jan 31 06:46:11 crc kubenswrapper[4687]: I0131 06:46:11.864165 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 06:46:11 crc kubenswrapper[4687]: E0131 06:46:11.926650 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 06:46:11 crc kubenswrapper[4687]: E0131 06:46:11.926844 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdj8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mrjq6_openshift-marketplace(d9539b4b-d10e-4607-9195-0acd7cee10c8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 06:46:11 crc kubenswrapper[4687]: E0131 06:46:11.928037 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mrjq6" 
podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" Jan 31 06:46:12 crc kubenswrapper[4687]: E0131 06:46:12.616695 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 06:46:12 crc kubenswrapper[4687]: E0131 06:46:12.616834 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kb7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:ni
l,} start failed in pod redhat-operators-q7f5g_openshift-marketplace(3b4dc04b-0379-4855-8b63-4ef29d0d6647): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 06:46:12 crc kubenswrapper[4687]: E0131 06:46:12.618080 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-q7f5g" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" Jan 31 06:46:13 crc kubenswrapper[4687]: E0131 06:46:13.424125 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-mrjq6" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" Jan 31 06:46:13 crc kubenswrapper[4687]: E0131 06:46:13.424811 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-q7f5g" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.208479 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 06:46:15 crc kubenswrapper[4687]: E0131 06:46:15.208705 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3" containerName="pruner" Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.208717 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3" containerName="pruner" Jan 31 06:46:15 
crc kubenswrapper[4687]: E0131 06:46:15.208737 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88de6162-f1fc-4140-89d2-1ec151ffe6b1" containerName="pruner" Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.208744 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="88de6162-f1fc-4140-89d2-1ec151ffe6b1" containerName="pruner" Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.208837 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d947e2c-dcbc-4bc2-b601-6ba3e7a2bdd3" containerName="pruner" Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.208874 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="88de6162-f1fc-4140-89d2-1ec151ffe6b1" containerName="pruner" Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.209236 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.211623 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.211785 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.223854 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.297271 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf1a3b1f-9dbb-4842-bc99-b7201cac5d74-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf1a3b1f-9dbb-4842-bc99-b7201cac5d74\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.297341 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf1a3b1f-9dbb-4842-bc99-b7201cac5d74-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf1a3b1f-9dbb-4842-bc99-b7201cac5d74\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 06:46:15 crc kubenswrapper[4687]: E0131 06:46:15.339167 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 06:46:15 crc kubenswrapper[4687]: E0131 06:46:15.339322 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tj6s4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:n
il,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-l2btx_openshift-marketplace(8ed021eb-a227-4014-a487-72aa0de25bac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 06:46:15 crc kubenswrapper[4687]: E0131 06:46:15.340586 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-l2btx" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.398130 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf1a3b1f-9dbb-4842-bc99-b7201cac5d74-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf1a3b1f-9dbb-4842-bc99-b7201cac5d74\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.398203 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf1a3b1f-9dbb-4842-bc99-b7201cac5d74-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf1a3b1f-9dbb-4842-bc99-b7201cac5d74\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.398289 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf1a3b1f-9dbb-4842-bc99-b7201cac5d74-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"cf1a3b1f-9dbb-4842-bc99-b7201cac5d74\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.418131 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf1a3b1f-9dbb-4842-bc99-b7201cac5d74-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf1a3b1f-9dbb-4842-bc99-b7201cac5d74\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.537242 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.872618 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-vxbfn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 06:46:15 crc kubenswrapper[4687]: I0131 06:46:15.872988 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vxbfn" podUID="e5b7bf80-e0c2-461f-944b-43b00db98f09" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 06:46:20 crc kubenswrapper[4687]: I0131 06:46:20.593606 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 06:46:20 crc kubenswrapper[4687]: I0131 06:46:20.595359 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 06:46:20 crc kubenswrapper[4687]: I0131 06:46:20.607002 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 06:46:20 crc kubenswrapper[4687]: I0131 06:46:20.680447 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-var-lock\") pod \"installer-9-crc\" (UID: \"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 06:46:20 crc kubenswrapper[4687]: I0131 06:46:20.680577 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-kube-api-access\") pod \"installer-9-crc\" (UID: \"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 06:46:20 crc kubenswrapper[4687]: I0131 06:46:20.680673 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 06:46:20 crc kubenswrapper[4687]: I0131 06:46:20.781809 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-var-lock\") pod \"installer-9-crc\" (UID: \"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 06:46:20 crc kubenswrapper[4687]: I0131 06:46:20.781891 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-var-lock\") pod \"installer-9-crc\" (UID: \"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 06:46:20 crc kubenswrapper[4687]: I0131 06:46:20.781912 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-kube-api-access\") pod \"installer-9-crc\" (UID: \"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 06:46:20 crc kubenswrapper[4687]: I0131 06:46:20.781985 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 06:46:20 crc kubenswrapper[4687]: I0131 06:46:20.782066 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 06:46:20 crc kubenswrapper[4687]: I0131 06:46:20.802787 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-kube-api-access\") pod \"installer-9-crc\" (UID: \"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 06:46:20 crc kubenswrapper[4687]: I0131 06:46:20.925207 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 06:46:24 crc kubenswrapper[4687]: E0131 06:46:24.483966 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-l2btx" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" Jan 31 06:46:25 crc kubenswrapper[4687]: I0131 06:46:25.871617 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-vxbfn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 06:46:25 crc kubenswrapper[4687]: I0131 06:46:25.871698 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vxbfn" podUID="e5b7bf80-e0c2-461f-944b-43b00db98f09" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 06:46:28 crc kubenswrapper[4687]: I0131 06:46:28.683873 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:46:28 crc kubenswrapper[4687]: I0131 06:46:28.684277 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:46:28 crc kubenswrapper[4687]: I0131 06:46:28.684337 4687 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:46:28 crc kubenswrapper[4687]: I0131 06:46:28.684906 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a"} pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 06:46:28 crc kubenswrapper[4687]: I0131 06:46:28.684962 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" containerID="cri-o://abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a" gracePeriod=600 Jan 31 06:46:29 crc kubenswrapper[4687]: E0131 06:46:29.585921 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 06:46:29 crc kubenswrapper[4687]: E0131 06:46:29.586125 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pcn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-w6tt8_openshift-marketplace(2a8064f7-2493-4fd0-a460-9d98ebdd1a24): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 06:46:29 crc kubenswrapper[4687]: E0131 06:46:29.587327 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-w6tt8" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" Jan 31 06:46:34 crc 
kubenswrapper[4687]: E0131 06:46:34.031397 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 06:46:34 crc kubenswrapper[4687]: E0131 06:46:34.031892 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqpxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-xkfv6_openshift-marketplace(267c7942-99ed-42bc-bb0c-3d2a2119267e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 06:46:34 crc kubenswrapper[4687]: E0131 06:46:34.033126 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-xkfv6" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" Jan 31 06:46:35 crc kubenswrapper[4687]: I0131 06:46:35.873641 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-vxbfn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 06:46:35 crc kubenswrapper[4687]: I0131 06:46:35.873711 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vxbfn" podUID="e5b7bf80-e0c2-461f-944b-43b00db98f09" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 06:46:39 crc kubenswrapper[4687]: I0131 06:46:39.122954 4687 generic.go:334] "Generic (PLEG): container finished" podID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerID="abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a" exitCode=0 Jan 31 06:46:39 crc kubenswrapper[4687]: I0131 06:46:39.123232 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerDied","Data":"abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a"} Jan 31 06:46:39 crc kubenswrapper[4687]: E0131 06:46:39.171125 4687 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xkfv6" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" Jan 31 06:46:39 crc kubenswrapper[4687]: E0131 06:46:39.171526 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-w6tt8" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" Jan 31 06:46:39 crc kubenswrapper[4687]: I0131 06:46:39.601958 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 06:46:39 crc kubenswrapper[4687]: W0131 06:46:39.604566 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcf1a3b1f_9dbb_4842_bc99_b7201cac5d74.slice/crio-a902c4db8fc7fcb9b354acf1f89caf494c04a92fb76aa432c41a54b77f1024b9 WatchSource:0}: Error finding container a902c4db8fc7fcb9b354acf1f89caf494c04a92fb76aa432c41a54b77f1024b9: Status 404 returned error can't find the container with id a902c4db8fc7fcb9b354acf1f89caf494c04a92fb76aa432c41a54b77f1024b9 Jan 31 06:46:39 crc kubenswrapper[4687]: I0131 06:46:39.634529 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hbxj7"] Jan 31 06:46:39 crc kubenswrapper[4687]: I0131 06:46:39.639322 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 06:46:40 crc kubenswrapper[4687]: I0131 06:46:40.130056 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-vxbfn" 
event={"ID":"e5b7bf80-e0c2-461f-944b-43b00db98f09","Type":"ContainerStarted","Data":"0558bc6cb5e3b180c2f7251cc4db4542d84523fff7dc05e16ebc8b8afebf6442"} Jan 31 06:46:40 crc kubenswrapper[4687]: I0131 06:46:40.131345 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" event={"ID":"dead0f10-2469-49a4-8d26-93fc90d6451d","Type":"ContainerStarted","Data":"603b3ec10b677d5cc444a44439699ca75f3a78b84b1ba32eaa3e278f76184f66"} Jan 31 06:46:40 crc kubenswrapper[4687]: I0131 06:46:40.132450 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29","Type":"ContainerStarted","Data":"30a5c02ad6e8029986eb4e696c72745dddc65488eb720de26ddc06c9b8ce42a5"} Jan 31 06:46:40 crc kubenswrapper[4687]: I0131 06:46:40.133370 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cf1a3b1f-9dbb-4842-bc99-b7201cac5d74","Type":"ContainerStarted","Data":"a902c4db8fc7fcb9b354acf1f89caf494c04a92fb76aa432c41a54b77f1024b9"} Jan 31 06:46:42 crc kubenswrapper[4687]: E0131 06:46:42.095564 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 06:46:42 crc kubenswrapper[4687]: E0131 06:46:42.096206 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r26dw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-j46rp_openshift-marketplace(dceba003-329b-4858-a9d2-7499eef39366): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 06:46:42 crc kubenswrapper[4687]: E0131 06:46:42.097627 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-j46rp" podUID="dceba003-329b-4858-a9d2-7499eef39366" Jan 31 06:46:42 crc 
kubenswrapper[4687]: I0131 06:46:42.145846 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerStarted","Data":"fae6440a00bffd2c9912563b3a0133e343e7e89f2c4e7a9ccaeea3baa2211238"} Jan 31 06:46:42 crc kubenswrapper[4687]: I0131 06:46:42.147304 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" event={"ID":"dead0f10-2469-49a4-8d26-93fc90d6451d","Type":"ContainerStarted","Data":"9c403b3e310f5011c78508188a79532db6332ed62ba40313c5b1f0665ae79511"} Jan 31 06:46:42 crc kubenswrapper[4687]: I0131 06:46:42.148716 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29","Type":"ContainerStarted","Data":"775babaf394f9da402e208fa168f5a90ea00f3e30218a65475bdeae7a0d9a429"} Jan 31 06:46:42 crc kubenswrapper[4687]: I0131 06:46:42.150296 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cf1a3b1f-9dbb-4842-bc99-b7201cac5d74","Type":"ContainerStarted","Data":"d384489ed5584f2adc24a28af5d0fdecbe802f014e5bbbfbbac1f9c926e5976f"} Jan 31 06:46:42 crc kubenswrapper[4687]: I0131 06:46:42.151013 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-vxbfn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 06:46:42 crc kubenswrapper[4687]: I0131 06:46:42.151062 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vxbfn" podUID="e5b7bf80-e0c2-461f-944b-43b00db98f09" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 06:46:42 crc 
kubenswrapper[4687]: E0131 06:46:42.152700 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-j46rp" podUID="dceba003-329b-4858-a9d2-7499eef39366" Jan 31 06:46:43 crc kubenswrapper[4687]: I0131 06:46:43.216194 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=28.216161757 podStartE2EDuration="28.216161757s" podCreationTimestamp="2026-01-31 06:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:46:43.205238373 +0000 UTC m=+229.482497958" watchObservedRunningTime="2026-01-31 06:46:43.216161757 +0000 UTC m=+229.493421362" Jan 31 06:46:43 crc kubenswrapper[4687]: I0131 06:46:43.231267 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=23.231241187 podStartE2EDuration="23.231241187s" podCreationTimestamp="2026-01-31 06:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:46:43.223043029 +0000 UTC m=+229.500302634" watchObservedRunningTime="2026-01-31 06:46:43.231241187 +0000 UTC m=+229.508500772" Jan 31 06:46:44 crc kubenswrapper[4687]: I0131 06:46:44.169244 4687 generic.go:334] "Generic (PLEG): container finished" podID="cf1a3b1f-9dbb-4842-bc99-b7201cac5d74" containerID="d384489ed5584f2adc24a28af5d0fdecbe802f014e5bbbfbbac1f9c926e5976f" exitCode=0 Jan 31 06:46:44 crc kubenswrapper[4687]: I0131 06:46:44.169400 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"cf1a3b1f-9dbb-4842-bc99-b7201cac5d74","Type":"ContainerDied","Data":"d384489ed5584f2adc24a28af5d0fdecbe802f014e5bbbfbbac1f9c926e5976f"} Jan 31 06:46:44 crc kubenswrapper[4687]: E0131 06:46:44.235951 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 06:46:44 crc kubenswrapper[4687]: E0131 06:46:44.236090 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzthl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOn
Error,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kpmd6_openshift-marketplace(fe701715-9a81-4ba7-be4b-f52834728547): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 06:46:44 crc kubenswrapper[4687]: E0131 06:46:44.237295 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kpmd6" podUID="fe701715-9a81-4ba7-be4b-f52834728547" Jan 31 06:46:45 crc kubenswrapper[4687]: E0131 06:46:45.188193 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 06:46:45 crc kubenswrapper[4687]: E0131 06:46:45.188395 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vs85t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-g6md9_openshift-marketplace(12638a02-8cb5-4367-a17a-fc50a1d9ddfb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 06:46:45 crc kubenswrapper[4687]: E0131 06:46:45.189642 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-g6md9" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" Jan 31 06:46:45 crc 
kubenswrapper[4687]: I0131 06:46:45.870981 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-vxbfn" Jan 31 06:46:45 crc kubenswrapper[4687]: I0131 06:46:45.889598 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-vxbfn" Jan 31 06:47:10 crc kubenswrapper[4687]: I0131 06:47:10.495281 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 06:47:10 crc kubenswrapper[4687]: I0131 06:47:10.587746 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf1a3b1f-9dbb-4842-bc99-b7201cac5d74-kube-api-access\") pod \"cf1a3b1f-9dbb-4842-bc99-b7201cac5d74\" (UID: \"cf1a3b1f-9dbb-4842-bc99-b7201cac5d74\") " Jan 31 06:47:10 crc kubenswrapper[4687]: I0131 06:47:10.587801 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf1a3b1f-9dbb-4842-bc99-b7201cac5d74-kubelet-dir\") pod \"cf1a3b1f-9dbb-4842-bc99-b7201cac5d74\" (UID: \"cf1a3b1f-9dbb-4842-bc99-b7201cac5d74\") " Jan 31 06:47:10 crc kubenswrapper[4687]: I0131 06:47:10.588085 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf1a3b1f-9dbb-4842-bc99-b7201cac5d74-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cf1a3b1f-9dbb-4842-bc99-b7201cac5d74" (UID: "cf1a3b1f-9dbb-4842-bc99-b7201cac5d74"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:47:10 crc kubenswrapper[4687]: I0131 06:47:10.593621 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1a3b1f-9dbb-4842-bc99-b7201cac5d74-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cf1a3b1f-9dbb-4842-bc99-b7201cac5d74" (UID: "cf1a3b1f-9dbb-4842-bc99-b7201cac5d74"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:47:10 crc kubenswrapper[4687]: I0131 06:47:10.689496 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf1a3b1f-9dbb-4842-bc99-b7201cac5d74-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 06:47:10 crc kubenswrapper[4687]: I0131 06:47:10.689533 4687 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf1a3b1f-9dbb-4842-bc99-b7201cac5d74-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 06:47:11 crc kubenswrapper[4687]: I0131 06:47:11.330234 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cf1a3b1f-9dbb-4842-bc99-b7201cac5d74","Type":"ContainerDied","Data":"a902c4db8fc7fcb9b354acf1f89caf494c04a92fb76aa432c41a54b77f1024b9"} Jan 31 06:47:11 crc kubenswrapper[4687]: I0131 06:47:11.330780 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a902c4db8fc7fcb9b354acf1f89caf494c04a92fb76aa432c41a54b77f1024b9" Jan 31 06:47:11 crc kubenswrapper[4687]: I0131 06:47:11.330337 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 06:47:14 crc kubenswrapper[4687]: I0131 06:47:14.348225 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hbxj7" event={"ID":"dead0f10-2469-49a4-8d26-93fc90d6451d","Type":"ContainerStarted","Data":"725e34b57fa31226521f2695a2936af306f451fc31b3e591f316eb13522af5cb"} Jan 31 06:47:14 crc kubenswrapper[4687]: I0131 06:47:14.374250 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hbxj7" podStartSLOduration=227.374211397 podStartE2EDuration="3m47.374211397s" podCreationTimestamp="2026-01-31 06:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:47:14.367240317 +0000 UTC m=+260.644499892" watchObservedRunningTime="2026-01-31 06:47:14.374211397 +0000 UTC m=+260.651470972" Jan 31 06:47:17 crc kubenswrapper[4687]: E0131 06:47:17.948374 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a8064f7_2493_4fd0_a460_9d98ebdd1a24.slice/crio-conmon-942c180d7e2b5cc1c25a6022719217b24af048cf2e5bff29c803aae25c76c47b.scope\": RecentStats: unable to find data in memory cache]" Jan 31 06:47:18 crc kubenswrapper[4687]: I0131 06:47:18.374101 4687 generic.go:334] "Generic (PLEG): container finished" podID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" containerID="942c180d7e2b5cc1c25a6022719217b24af048cf2e5bff29c803aae25c76c47b" exitCode=0 Jan 31 06:47:18 crc kubenswrapper[4687]: I0131 06:47:18.374170 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6tt8" event={"ID":"2a8064f7-2493-4fd0-a460-9d98ebdd1a24","Type":"ContainerDied","Data":"942c180d7e2b5cc1c25a6022719217b24af048cf2e5bff29c803aae25c76c47b"} Jan 31 06:47:18 crc 
kubenswrapper[4687]: I0131 06:47:18.380568 4687 generic.go:334] "Generic (PLEG): container finished" podID="dceba003-329b-4858-a9d2-7499eef39366" containerID="6440e22ec10ad1507f54e35a6eb2c77fb13a3bc7d6db5b0006ae0965f7d232d2" exitCode=0 Jan 31 06:47:18 crc kubenswrapper[4687]: I0131 06:47:18.380621 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j46rp" event={"ID":"dceba003-329b-4858-a9d2-7499eef39366","Type":"ContainerDied","Data":"6440e22ec10ad1507f54e35a6eb2c77fb13a3bc7d6db5b0006ae0965f7d232d2"} Jan 31 06:47:18 crc kubenswrapper[4687]: I0131 06:47:18.382677 4687 generic.go:334] "Generic (PLEG): container finished" podID="fe701715-9a81-4ba7-be4b-f52834728547" containerID="2fcdfd4d462828444fecc64e3671ccaa2781518f71074862af86d86b723f991b" exitCode=0 Jan 31 06:47:18 crc kubenswrapper[4687]: I0131 06:47:18.382731 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpmd6" event={"ID":"fe701715-9a81-4ba7-be4b-f52834728547","Type":"ContainerDied","Data":"2fcdfd4d462828444fecc64e3671ccaa2781518f71074862af86d86b723f991b"} Jan 31 06:47:18 crc kubenswrapper[4687]: I0131 06:47:18.386013 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrjq6" event={"ID":"d9539b4b-d10e-4607-9195-0acd7cee10c8","Type":"ContainerStarted","Data":"98a51efababe836aad2f815475c8ea521fe488d2f319bc19e0709c2ed2baeffb"} Jan 31 06:47:18 crc kubenswrapper[4687]: I0131 06:47:18.388225 4687 generic.go:334] "Generic (PLEG): container finished" podID="8ed021eb-a227-4014-a487-72aa0de25bac" containerID="bf85af373958e1e93d1f8f11d4ac20928993edbe7f6dbb8559d83fe06014bc38" exitCode=0 Jan 31 06:47:18 crc kubenswrapper[4687]: I0131 06:47:18.388333 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2btx" 
event={"ID":"8ed021eb-a227-4014-a487-72aa0de25bac","Type":"ContainerDied","Data":"bf85af373958e1e93d1f8f11d4ac20928993edbe7f6dbb8559d83fe06014bc38"} Jan 31 06:47:18 crc kubenswrapper[4687]: I0131 06:47:18.390673 4687 generic.go:334] "Generic (PLEG): container finished" podID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" containerID="1fe7d45d7776598cb69b0058dd2cfa6273068f12519ae1f848c8035ea8292833" exitCode=0 Jan 31 06:47:18 crc kubenswrapper[4687]: I0131 06:47:18.390707 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6md9" event={"ID":"12638a02-8cb5-4367-a17a-fc50a1d9ddfb","Type":"ContainerDied","Data":"1fe7d45d7776598cb69b0058dd2cfa6273068f12519ae1f848c8035ea8292833"} Jan 31 06:47:18 crc kubenswrapper[4687]: I0131 06:47:18.393709 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7f5g" event={"ID":"3b4dc04b-0379-4855-8b63-4ef29d0d6647","Type":"ContainerStarted","Data":"980572ecd2fad3fc43f43f0f7395cb076ee0e19c2646d8deda69d6c359ebee28"} Jan 31 06:47:18 crc kubenswrapper[4687]: I0131 06:47:18.395881 4687 generic.go:334] "Generic (PLEG): container finished" podID="267c7942-99ed-42bc-bb0c-3d2a2119267e" containerID="7b1d0bf90aed5e9fe52815c166939edde7d0c8e809a40a489d06d8f0c71c7697" exitCode=0 Jan 31 06:47:18 crc kubenswrapper[4687]: I0131 06:47:18.395911 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkfv6" event={"ID":"267c7942-99ed-42bc-bb0c-3d2a2119267e","Type":"ContainerDied","Data":"7b1d0bf90aed5e9fe52815c166939edde7d0c8e809a40a489d06d8f0c71c7697"} Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.175665 4687 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 06:47:19 crc kubenswrapper[4687]: E0131 06:47:19.176165 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1a3b1f-9dbb-4842-bc99-b7201cac5d74" 
containerName="pruner" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.176178 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1a3b1f-9dbb-4842-bc99-b7201cac5d74" containerName="pruner" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.176294 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf1a3b1f-9dbb-4842-bc99-b7201cac5d74" containerName="pruner" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.176631 4687 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.176927 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029" gracePeriod=15 Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.177009 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.176997 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae" gracePeriod=15 Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.177009 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4" gracePeriod=15 Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.177047 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae" gracePeriod=15 Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.177026 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580" gracePeriod=15 Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.178229 4687 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 06:47:19 crc kubenswrapper[4687]: E0131 06:47:19.178443 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.178457 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 06:47:19 crc kubenswrapper[4687]: E0131 06:47:19.178474 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.178481 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 06:47:19 crc kubenswrapper[4687]: E0131 06:47:19.178492 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.178500 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 06:47:19 crc kubenswrapper[4687]: E0131 06:47:19.178509 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.178516 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 06:47:19 crc kubenswrapper[4687]: E0131 06:47:19.178530 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.178537 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 06:47:19 crc kubenswrapper[4687]: E0131 06:47:19.178549 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.178555 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 06:47:19 crc kubenswrapper[4687]: E0131 06:47:19.178563 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.178571 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 06:47:19 crc kubenswrapper[4687]: E0131 06:47:19.178585 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.178593 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.178800 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.178817 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.178831 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.178842 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.178852 
4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.178863 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.179098 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 06:47:19 crc kubenswrapper[4687]: E0131 06:47:19.227992 4687 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.23:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.296661 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.296727 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.296761 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.296836 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.296856 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.296880 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.296916 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.296939 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.397554 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.397596 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.397622 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.397676 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.397672 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.397713 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.397699 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.397747 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.397713 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.397828 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.397889 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.397916 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.397924 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.397944 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.397960 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.397988 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.403694 4687 generic.go:334] "Generic (PLEG): container finished" podID="d9539b4b-d10e-4607-9195-0acd7cee10c8" containerID="98a51efababe836aad2f815475c8ea521fe488d2f319bc19e0709c2ed2baeffb" exitCode=0 Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.403791 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrjq6" event={"ID":"d9539b4b-d10e-4607-9195-0acd7cee10c8","Type":"ContainerDied","Data":"98a51efababe836aad2f815475c8ea521fe488d2f319bc19e0709c2ed2baeffb"} Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.404500 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.404717 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.405716 4687 generic.go:334] "Generic (PLEG): container finished" podID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" 
containerID="980572ecd2fad3fc43f43f0f7395cb076ee0e19c2646d8deda69d6c359ebee28" exitCode=0 Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.405753 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7f5g" event={"ID":"3b4dc04b-0379-4855-8b63-4ef29d0d6647","Type":"ContainerDied","Data":"980572ecd2fad3fc43f43f0f7395cb076ee0e19c2646d8deda69d6c359ebee28"} Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.406550 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.406946 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.407201 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.408280 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.409637 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.410332 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae" exitCode=0 Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.410354 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae" exitCode=0 Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.410369 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4" exitCode=0 Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.410381 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580" exitCode=2 Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.410380 4687 scope.go:117] "RemoveContainer" containerID="a9479a340a538a325e165c6878ccef5401183ec7e7da5922ebefd7ec74d04c45" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.411331 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.411571 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.411824 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.412017 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.412385 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.412626 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.412910 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.413121 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.413355 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.413578 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.413768 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.413969 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:19 crc kubenswrapper[4687]: I0131 06:47:19.529329 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:19 crc kubenswrapper[4687]: E0131 06:47:19.538500 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.23:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-l2btx.188fbdf53b32d069 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-l2btx,UID:8ed021eb-a227-4014-a487-72aa0de25bac,APIVersion:v1,ResourceVersion:28371,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 06:47:19.537717353 +0000 UTC m=+265.814976928,LastTimestamp:2026-01-31 06:47:19.537717353 +0000 UTC m=+265.814976928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 06:47:19 crc kubenswrapper[4687]: W0131 06:47:19.605093 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-e357798674e8f7b61a7a188ed555b7ddf0b8630d2ac7354b3b471af027cd4697 WatchSource:0}: Error finding container e357798674e8f7b61a7a188ed555b7ddf0b8630d2ac7354b3b471af027cd4697: Status 404 returned error can't find the container with id 
e357798674e8f7b61a7a188ed555b7ddf0b8630d2ac7354b3b471af027cd4697 Jan 31 06:47:19 crc kubenswrapper[4687]: E0131 06:47:19.691856 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.23:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-l2btx.188fbdf53b32d069 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-l2btx,UID:8ed021eb-a227-4014-a487-72aa0de25bac,APIVersion:v1,ResourceVersion:28371,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 06:47:19.537717353 +0000 UTC m=+265.814976928,LastTimestamp:2026-01-31 06:47:19.537717353 +0000 UTC m=+265.814976928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 06:47:20 crc kubenswrapper[4687]: I0131 06:47:20.417356 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 06:47:20 crc kubenswrapper[4687]: I0131 06:47:20.420548 4687 generic.go:334] "Generic (PLEG): container finished" podID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" containerID="775babaf394f9da402e208fa168f5a90ea00f3e30218a65475bdeae7a0d9a429" exitCode=0 Jan 31 06:47:20 crc kubenswrapper[4687]: I0131 06:47:20.420617 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29","Type":"ContainerDied","Data":"775babaf394f9da402e208fa168f5a90ea00f3e30218a65475bdeae7a0d9a429"} Jan 31 06:47:20 crc kubenswrapper[4687]: I0131 06:47:20.421252 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:20 crc kubenswrapper[4687]: I0131 06:47:20.421465 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:20 crc kubenswrapper[4687]: I0131 06:47:20.421691 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:20 crc kubenswrapper[4687]: I0131 06:47:20.421942 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:20 crc kubenswrapper[4687]: I0131 06:47:20.422194 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:20 crc kubenswrapper[4687]: I0131 06:47:20.422380 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:20 crc kubenswrapper[4687]: I0131 06:47:20.423427 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6tt8" event={"ID":"2a8064f7-2493-4fd0-a460-9d98ebdd1a24","Type":"ContainerStarted","Data":"444d9b4d1b1beef90576ca3065a041bc9b9e1b0fbb25f4e281e5b66971e98409"} Jan 31 06:47:20 crc kubenswrapper[4687]: I0131 06:47:20.424312 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e357798674e8f7b61a7a188ed555b7ddf0b8630d2ac7354b3b471af027cd4697"} Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.429886 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1239726cfad6b5a466c1ed70b6a1fb4d6c62a4b7eba460918770d294df419758"} Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.430865 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: E0131 06:47:21.430899 4687 
kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.23:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.431133 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.431381 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.431658 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.431904 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.432163 4687 status_manager.go:851] "Failed to get status for pod" 
podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.432650 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.433094 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.433363 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.433620 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.433876 4687 status_manager.go:851] "Failed to get status for pod" 
podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.434112 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.434325 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.769023 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.770056 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.770552 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.770830 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.771095 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.771345 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 
38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.771654 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.772204 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.840019 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-var-lock\") pod \"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29\" (UID: \"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29\") " Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.840111 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-kube-api-access\") pod \"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29\" (UID: \"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29\") " Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.840187 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-kubelet-dir\") pod \"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29\" (UID: \"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29\") " Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.840167 4687 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-var-lock" (OuterVolumeSpecName: "var-lock") pod "3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" (UID: "3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.840323 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" (UID: "3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.840474 4687 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.840499 4687 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.845752 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" (UID: "3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:47:21 crc kubenswrapper[4687]: I0131 06:47:21.942975 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.217284 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.218563 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.219103 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.219589 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.220171 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.220369 
4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.220643 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.220998 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.221293 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.221680 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.348004 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.348125 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.348124 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.348181 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.348244 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.348306 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.348481 4687 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.348512 4687 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.348528 4687 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.436929 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkfv6" event={"ID":"267c7942-99ed-42bc-bb0c-3d2a2119267e","Type":"ContainerStarted","Data":"a90a7cb91b73807835a058e8a1acd8c5179f90abc7a90ce3662fd9d792000a8c"} Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.437606 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: 
I0131 06:47:22.437875 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.438168 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.438398 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.438673 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.438891 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.439129 4687 
status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.439355 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.439661 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.441752 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.442571 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029" exitCode=0 Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.442662 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.442667 4687 scope.go:117] "RemoveContainer" containerID="42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.445049 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.446523 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29","Type":"ContainerDied","Data":"30a5c02ad6e8029986eb4e696c72745dddc65488eb720de26ddc06c9b8ce42a5"} Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.446563 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30a5c02ad6e8029986eb4e696c72745dddc65488eb720de26ddc06c9b8ce42a5" Jan 31 06:47:22 crc kubenswrapper[4687]: E0131 06:47:22.446593 4687 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.23:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.458517 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.458999 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.459346 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.459984 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.460507 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.460737 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.460931 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.461756 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.462697 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.463782 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.464060 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.464448 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.464722 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.464956 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.465269 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.465570 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.465806 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:22 crc kubenswrapper[4687]: I0131 06:47:22.466052 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:23 crc kubenswrapper[4687]: I0131 06:47:23.583368 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:47:23 crc kubenswrapper[4687]: I0131 06:47:23.583496 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:47:23 crc kubenswrapper[4687]: I0131 06:47:23.610068 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 31 06:47:24 crc kubenswrapper[4687]: I0131 06:47:24.819599 4687 scope.go:117] "RemoveContainer" containerID="9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae" Jan 31 06:47:25 crc kubenswrapper[4687]: E0131 06:47:25.265393 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:47:25Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:47:25Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:47:25Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:47:25Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[],\\\"sizeBytes\\\":1680805611},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632
d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\
\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:25 crc kubenswrapper[4687]: E0131 06:47:25.265847 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:25 crc kubenswrapper[4687]: E0131 06:47:25.266048 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:25 crc kubenswrapper[4687]: E0131 06:47:25.266232 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:25 crc kubenswrapper[4687]: E0131 06:47:25.266659 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:25 crc kubenswrapper[4687]: E0131 06:47:25.266691 4687 kubelet_node_status.go:572] "Unable to update node status" 
err="update node status exceeds retry count" Jan 31 06:47:25 crc kubenswrapper[4687]: I0131 06:47:25.464320 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 06:47:25 crc kubenswrapper[4687]: I0131 06:47:25.608842 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:25 crc kubenswrapper[4687]: I0131 06:47:25.609445 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:25 crc kubenswrapper[4687]: I0131 06:47:25.609956 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:25 crc kubenswrapper[4687]: I0131 06:47:25.611178 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:25 crc kubenswrapper[4687]: I0131 06:47:25.611937 4687 status_manager.go:851] "Failed to get status for pod" 
podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:25 crc kubenswrapper[4687]: I0131 06:47:25.612536 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:25 crc kubenswrapper[4687]: I0131 06:47:25.613240 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:25 crc kubenswrapper[4687]: I0131 06:47:25.613916 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:25 crc kubenswrapper[4687]: I0131 06:47:25.949431 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-w6tt8" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" containerName="registry-server" probeResult="failure" output=< Jan 31 06:47:25 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Jan 31 06:47:25 crc kubenswrapper[4687]: > Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.136789 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.137068 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.236335 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.236950 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.237314 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.237727 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.238031 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: 
connection refused" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.238304 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.238587 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.238895 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.239146 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.362921 4687 scope.go:117] "RemoveContainer" containerID="61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.472256 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.508294 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.508862 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.509269 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.509552 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.509814 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.510103 4687 status_manager.go:851] "Failed to get 
status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.510517 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.510720 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:26 crc kubenswrapper[4687]: I0131 06:47:26.511061 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:28 crc kubenswrapper[4687]: E0131 06:47:28.113227 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:28 crc kubenswrapper[4687]: E0131 06:47:28.114070 4687 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:28 crc kubenswrapper[4687]: E0131 06:47:28.114566 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:28 crc kubenswrapper[4687]: E0131 06:47:28.114943 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:28 crc kubenswrapper[4687]: E0131 06:47:28.115191 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:28 crc kubenswrapper[4687]: I0131 06:47:28.115216 4687 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 31 06:47:28 crc kubenswrapper[4687]: E0131 06:47:28.115365 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" interval="200ms" Jan 31 06:47:28 crc kubenswrapper[4687]: E0131 06:47:28.316243 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" interval="400ms" Jan 31 06:47:28 crc kubenswrapper[4687]: I0131 06:47:28.596210 4687 scope.go:117] 
"RemoveContainer" containerID="75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580" Jan 31 06:47:28 crc kubenswrapper[4687]: E0131 06:47:28.717064 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" interval="800ms" Jan 31 06:47:29 crc kubenswrapper[4687]: I0131 06:47:29.066648 4687 scope.go:117] "RemoveContainer" containerID="a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029" Jan 31 06:47:29 crc kubenswrapper[4687]: E0131 06:47:29.518756 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" interval="1.6s" Jan 31 06:47:31 crc kubenswrapper[4687]: E0131 06:47:29.692879 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.23:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-l2btx.188fbdf53b32d069 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-l2btx,UID:8ed021eb-a227-4014-a487-72aa0de25bac,APIVersion:v1,ResourceVersion:28371,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 06:47:19.537717353 +0000 UTC m=+265.814976928,LastTimestamp:2026-01-31 06:47:19.537717353 +0000 UTC m=+265.814976928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:29.886238 4687 scope.go:117] "RemoveContainer" containerID="992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:29.908683 4687 scope.go:117] "RemoveContainer" containerID="42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae" Jan 31 06:47:31 crc kubenswrapper[4687]: E0131 06:47:29.909357 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\": container with ID starting with 42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae not found: ID does not exist" containerID="42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:29.909434 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae"} err="failed to get container status \"42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\": rpc error: code = NotFound desc = could not find container \"42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae\": container with ID starting with 42cc8f82a2866d02b3727e326313c6c4f4351f33eeb0a76e2a878101a17cceae not found: ID does not exist" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:29.909455 4687 scope.go:117] "RemoveContainer" containerID="9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae" Jan 31 06:47:31 crc kubenswrapper[4687]: E0131 06:47:29.911109 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\": container with ID starting with 
9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae not found: ID does not exist" containerID="9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:29.911151 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae"} err="failed to get container status \"9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\": rpc error: code = NotFound desc = could not find container \"9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae\": container with ID starting with 9eadc74521b69e7a9d7be300a2601998e9ed41f1a6715877808b30907b547aae not found: ID does not exist" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:29.911177 4687 scope.go:117] "RemoveContainer" containerID="61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4" Jan 31 06:47:31 crc kubenswrapper[4687]: E0131 06:47:29.911616 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\": container with ID starting with 61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4 not found: ID does not exist" containerID="61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:29.911649 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4"} err="failed to get container status \"61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\": rpc error: code = NotFound desc = could not find container \"61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4\": container with ID starting with 61ab1535a69a2f68d13f19f5e6f6bceb1f618fbe15bfe34ece41551a2bc686b4 not found: ID does not 
exist" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:29.911668 4687 scope.go:117] "RemoveContainer" containerID="75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580" Jan 31 06:47:31 crc kubenswrapper[4687]: E0131 06:47:29.912088 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\": container with ID starting with 75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580 not found: ID does not exist" containerID="75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:29.912124 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580"} err="failed to get container status \"75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\": rpc error: code = NotFound desc = could not find container \"75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580\": container with ID starting with 75870b7f160e3515c369aa36305c004f5812f9c66189f8c276b7a42b5fb14580 not found: ID does not exist" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:29.912149 4687 scope.go:117] "RemoveContainer" containerID="a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029" Jan 31 06:47:31 crc kubenswrapper[4687]: E0131 06:47:29.912621 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\": container with ID starting with a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029 not found: ID does not exist" containerID="a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:29.912641 4687 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029"} err="failed to get container status \"a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\": rpc error: code = NotFound desc = could not find container \"a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029\": container with ID starting with a54a26122fbd30465178becbe3dfda8ccd74da723ab26a4c0dbdff805c7e5029 not found: ID does not exist" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:29.912657 4687 scope.go:117] "RemoveContainer" containerID="992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564" Jan 31 06:47:31 crc kubenswrapper[4687]: E0131 06:47:29.913501 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\": container with ID starting with 992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564 not found: ID does not exist" containerID="992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:29.913574 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564"} err="failed to get container status \"992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\": rpc error: code = NotFound desc = could not find container \"992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564\": container with ID starting with 992787b72308bb14e213b2f924a0bede4e954abc1925c7ca4548505d3142a564 not found: ID does not exist" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.497940 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrjq6" 
event={"ID":"d9539b4b-d10e-4607-9195-0acd7cee10c8","Type":"ContainerStarted","Data":"d41697ddadcd64d35f650898d504635d2850beb45d80908ca74bb2e5fe3de8ee"} Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.498549 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.498748 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.498955 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.499286 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.499497 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.499651 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.499782 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.499916 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.501156 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j46rp" event={"ID":"dceba003-329b-4858-a9d2-7499eef39366","Type":"ContainerStarted","Data":"46178f3882844e754cc54e999ed3c0f1fce1ca4c536309f64dfac228c8d8d2a3"} Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.501783 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.501939 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.502108 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.502290 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.502460 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.502596 4687 status_manager.go:851] "Failed to get status for pod" podUID="dceba003-329b-4858-a9d2-7499eef39366" pod="openshift-marketplace/community-operators-j46rp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j46rp\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.502725 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.502867 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: I0131 06:47:30.502998 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:31 crc kubenswrapper[4687]: E0131 06:47:31.119893 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" interval="3.2s" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.525796 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2btx" event={"ID":"8ed021eb-a227-4014-a487-72aa0de25bac","Type":"ContainerStarted","Data":"25c6810dfc2b19b46d120d51a4fc898eed020e65d507c50d8bf13005d344aca3"} Jan 
31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.528320 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.529119 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.529532 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.529771 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.530019 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc 
kubenswrapper[4687]: I0131 06:47:33.530308 4687 status_manager.go:851] "Failed to get status for pod" podUID="dceba003-329b-4858-a9d2-7499eef39366" pod="openshift-marketplace/community-operators-j46rp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j46rp\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.531682 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.531933 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.532205 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.533941 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6md9" event={"ID":"12638a02-8cb5-4367-a17a-fc50a1d9ddfb","Type":"ContainerStarted","Data":"3033cd4b2a677fb6257cf8b258c1bad60b6c0ab566b0edfffcf094959e3697a0"} Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.535035 4687 status_manager.go:851] "Failed to get status for pod" 
podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.535217 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.535381 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.535604 4687 status_manager.go:851] "Failed to get status for pod" podUID="dceba003-329b-4858-a9d2-7499eef39366" pod="openshift-marketplace/community-operators-j46rp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j46rp\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.535883 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.536063 4687 status_manager.go:851] "Failed to get status for pod" 
podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.536225 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.536391 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.536563 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.537773 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7f5g" event={"ID":"3b4dc04b-0379-4855-8b63-4ef29d0d6647","Type":"ContainerStarted","Data":"01c4a5d3351262379b031001ded0849a5bea4d726bda12859102be3f6a0583e2"} Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.538436 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.538651 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.538867 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.539116 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.539303 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.539490 4687 status_manager.go:851] "Failed to get status for pod" podUID="dceba003-329b-4858-a9d2-7499eef39366" pod="openshift-marketplace/community-operators-j46rp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j46rp\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.543687 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.544144 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.544579 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.545490 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpmd6" event={"ID":"fe701715-9a81-4ba7-be4b-f52834728547","Type":"ContainerStarted","Data":"f4a4c9c7a93540bb34a64d9a1c26bbfd6af95cd1ba0a331a9287013e42597245"} Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.546362 4687 status_manager.go:851] "Failed to get status for pod" podUID="dceba003-329b-4858-a9d2-7499eef39366" pod="openshift-marketplace/community-operators-j46rp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j46rp\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.546546 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.546697 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.546841 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.546981 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.547120 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.547254 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.547429 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.547581 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.631205 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.631806 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.632471 4687 
status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.632742 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.633046 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.633338 4687 status_manager.go:851] "Failed to get status for pod" podUID="dceba003-329b-4858-a9d2-7499eef39366" pod="openshift-marketplace/community-operators-j46rp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j46rp\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.633643 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.633908 4687 status_manager.go:851] "Failed to 
get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.634186 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.634468 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.668883 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.669526 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.670057 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.670574 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.670883 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.671179 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.671452 4687 status_manager.go:851] "Failed to get status for pod" podUID="dceba003-329b-4858-a9d2-7499eef39366" pod="openshift-marketplace/community-operators-j46rp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j46rp\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.671776 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.672087 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.672372 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.848124 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.848332 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.965093 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:47:33 crc kubenswrapper[4687]: I0131 06:47:33.965154 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.133641 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.133709 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.176878 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.177381 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.177778 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.178198 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.178436 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.178640 4687 status_manager.go:851] "Failed to get status for pod" 
podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.178986 4687 status_manager.go:851] "Failed to get status for pod" podUID="dceba003-329b-4858-a9d2-7499eef39366" pod="openshift-marketplace/community-operators-j46rp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j46rp\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.179437 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.179686 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.179863 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: E0131 06:47:34.321025 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" interval="6.4s" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.434080 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.434146 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.559549 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.559597 4687 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97" exitCode=1 Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.559735 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97"} Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.560766 4687 scope.go:117] "RemoveContainer" containerID="270f29d0cea02226e9e95fa259435a93d82235e64448511360e92e96db878b97" Jan 31 06:47:34 crc 
kubenswrapper[4687]: I0131 06:47:34.561132 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.561512 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.561750 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.561986 4687 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.562252 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc 
kubenswrapper[4687]: I0131 06:47:34.562574 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.562761 4687 status_manager.go:851] "Failed to get status for pod" podUID="dceba003-329b-4858-a9d2-7499eef39366" pod="openshift-marketplace/community-operators-j46rp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j46rp\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.562923 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.563076 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.563261 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 
06:47:34.602845 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.603851 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.604332 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.604616 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.606160 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.606551 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.606940 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.607322 4687 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.607615 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.607832 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.608068 4687 status_manager.go:851] "Failed to get status for pod" podUID="dceba003-329b-4858-a9d2-7499eef39366" pod="openshift-marketplace/community-operators-j46rp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j46rp\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.616059 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee039356-c458-45b0-84a6-c533eec8da86" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.616086 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee039356-c458-45b0-84a6-c533eec8da86" Jan 31 06:47:34 crc kubenswrapper[4687]: E0131 06:47:34.616507 4687 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.616925 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:34 crc kubenswrapper[4687]: W0131 06:47:34.644832 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-36a55d7ce592cb89e1340070e9e55734e0d40d542e9de0cc7b8469e766f1c85c WatchSource:0}: Error finding container 36a55d7ce592cb89e1340070e9e55734e0d40d542e9de0cc7b8469e766f1c85c: Status 404 returned error can't find the container with id 36a55d7ce592cb89e1340070e9e55734e0d40d542e9de0cc7b8469e766f1c85c Jan 31 06:47:34 crc kubenswrapper[4687]: I0131 06:47:34.883359 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-g6md9" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" containerName="registry-server" probeResult="failure" output=< Jan 31 06:47:34 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Jan 31 06:47:34 crc kubenswrapper[4687]: > Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.006761 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-l2btx" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" containerName="registry-server" probeResult="failure" output=< Jan 31 06:47:35 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Jan 31 06:47:35 crc kubenswrapper[4687]: > Jan 31 06:47:35 crc kubenswrapper[4687]: E0131 06:47:35.464722 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:47:35Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:47:35Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:47:35Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T06:47:35Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[],\\\"sizeBytes\\\":1680805611},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632
d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\
\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: E0131 06:47:35.465300 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: E0131 06:47:35.465640 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: E0131 06:47:35.466004 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: E0131 06:47:35.466281 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: E0131 06:47:35.466307 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.565624 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"36a55d7ce592cb89e1340070e9e55734e0d40d542e9de0cc7b8469e766f1c85c"} Jan 
31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.617483 4687 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.617924 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.618136 4687 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.618482 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.618959 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc 
kubenswrapper[4687]: I0131 06:47:35.619380 4687 status_manager.go:851] "Failed to get status for pod" podUID="dceba003-329b-4858-a9d2-7499eef39366" pod="openshift-marketplace/community-operators-j46rp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j46rp\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.620050 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.620359 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.620736 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.621015 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 
06:47:35.621239 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.713289 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.713346 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.760542 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.761217 4687 status_manager.go:851] "Failed to get status for pod" podUID="dceba003-329b-4858-a9d2-7499eef39366" pod="openshift-marketplace/community-operators-j46rp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j46rp\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.761669 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.761931 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.762178 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.762471 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.762698 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.762983 4687 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.763334 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.763642 4687 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.763911 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:35 crc kubenswrapper[4687]: I0131 06:47:35.764195 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:36 crc kubenswrapper[4687]: I0131 06:47:36.573787 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 06:47:36 crc kubenswrapper[4687]: I0131 06:47:36.574207 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4f580753406e47a26f777994014ceabada9400b115d8b93161781871f56e2a94"} Jan 31 06:47:36 crc kubenswrapper[4687]: 
I0131 06:47:36.574845 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:36 crc kubenswrapper[4687]: I0131 06:47:36.575073 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:36 crc kubenswrapper[4687]: I0131 06:47:36.575511 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"588bafd8bef08617e86d96ae284f0940107ee94f100904c5177e2fce7c3932e6"} Jan 31 06:47:36 crc kubenswrapper[4687]: I0131 06:47:36.575526 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:36 crc kubenswrapper[4687]: I0131 06:47:36.575875 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:36 crc kubenswrapper[4687]: I0131 06:47:36.576145 4687 status_manager.go:851] "Failed to get status for pod" 
podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:36 crc kubenswrapper[4687]: I0131 06:47:36.576444 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:36 crc kubenswrapper[4687]: I0131 06:47:36.576734 4687 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:36 crc kubenswrapper[4687]: I0131 06:47:36.576977 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:36 crc kubenswrapper[4687]: I0131 06:47:36.577235 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:36 crc kubenswrapper[4687]: I0131 06:47:36.577570 4687 status_manager.go:851] "Failed to get status for pod" 
podUID="dceba003-329b-4858-a9d2-7499eef39366" pod="openshift-marketplace/community-operators-j46rp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j46rp\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:36 crc kubenswrapper[4687]: I0131 06:47:36.577928 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:36 crc kubenswrapper[4687]: I0131 06:47:36.749917 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:47:36 crc kubenswrapper[4687]: I0131 06:47:36.749971 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.158802 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.158862 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.205055 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.205859 4687 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 
06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.206528 4687 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.207261 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.207720 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.208042 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.208422 4687 status_manager.go:851] "Failed to get status for pod" podUID="dceba003-329b-4858-a9d2-7499eef39366" pod="openshift-marketplace/community-operators-j46rp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j46rp\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 
06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.208892 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.209217 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.209658 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.209931 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.210234 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc 
kubenswrapper[4687]: I0131 06:47:37.582037 4687 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="588bafd8bef08617e86d96ae284f0940107ee94f100904c5177e2fce7c3932e6" exitCode=0 Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.582189 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"588bafd8bef08617e86d96ae284f0940107ee94f100904c5177e2fce7c3932e6"} Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.582751 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee039356-c458-45b0-84a6-c533eec8da86" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.582775 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee039356-c458-45b0-84a6-c533eec8da86" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.583180 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: E0131 06:47:37.583237 4687 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.583630 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.583945 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.584241 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.584520 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.584724 4687 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.584915 4687 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.585103 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.585303 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.585531 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.585749 4687 status_manager.go:851] "Failed to get status for pod" podUID="dceba003-329b-4858-a9d2-7499eef39366" pod="openshift-marketplace/community-operators-j46rp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j46rp\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.619824 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 
06:47:37.620345 4687 status_manager.go:851] "Failed to get status for pod" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" pod="openshift-marketplace/certified-operators-w6tt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w6tt8\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.620653 4687 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.621084 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.621296 4687 status_manager.go:851] "Failed to get status for pod" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" pod="openshift-marketplace/certified-operators-l2btx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l2btx\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.621558 4687 status_manager.go:851] "Failed to get status for pod" podUID="dceba003-329b-4858-a9d2-7499eef39366" pod="openshift-marketplace/community-operators-j46rp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j46rp\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 
06:47:37.621856 4687 status_manager.go:851] "Failed to get status for pod" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" pod="openshift-marketplace/redhat-operators-mrjq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mrjq6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.622024 4687 status_manager.go:851] "Failed to get status for pod" podUID="fe701715-9a81-4ba7-be4b-f52834728547" pod="openshift-marketplace/redhat-marketplace-kpmd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kpmd6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.622228 4687 status_manager.go:851] "Failed to get status for pod" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" pod="openshift-marketplace/redhat-operators-q7f5g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-q7f5g\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.622505 4687 status_manager.go:851] "Failed to get status for pod" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" pod="openshift-marketplace/redhat-marketplace-xkfv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xkfv6\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.622785 4687 status_manager.go:851] "Failed to get status for pod" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" pod="openshift-marketplace/community-operators-g6md9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g6md9\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.622990 4687 
status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.23:6443: connect: connection refused" Jan 31 06:47:37 crc kubenswrapper[4687]: I0131 06:47:37.786697 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q7f5g" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" containerName="registry-server" probeResult="failure" output=< Jan 31 06:47:37 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Jan 31 06:47:37 crc kubenswrapper[4687]: > Jan 31 06:47:38 crc kubenswrapper[4687]: I0131 06:47:38.602448 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:47:39 crc kubenswrapper[4687]: I0131 06:47:39.601634 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b40c80dcb571ecfa2840a11909eb36e2a8e53b0781f702721f84ce98480c61f9"} Jan 31 06:47:40 crc kubenswrapper[4687]: I0131 06:47:40.608928 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3e8254cb819852f26f23553cc56b135f56057ec90debe69a71199b61ca16cfc2"} Jan 31 06:47:41 crc kubenswrapper[4687]: I0131 06:47:41.617832 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"26b9d7a4c4bcc0a7419bdf420db61d126565a4e7af11bb7de6b2fefc0eba20d3"} Jan 31 06:47:41 crc kubenswrapper[4687]: I0131 06:47:41.618330 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ae2fc31446ea89389e7a7de93456bdf04371c6b366f667e9890e0a8e0fb5bcdf"} Jan 31 06:47:42 crc kubenswrapper[4687]: I0131 06:47:42.627096 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"993d878c4d0cacfa45431a8011f8cd03366a289d17211368b29a3b7559792a5e"} Jan 31 06:47:42 crc kubenswrapper[4687]: I0131 06:47:42.627456 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee039356-c458-45b0-84a6-c533eec8da86" Jan 31 06:47:42 crc kubenswrapper[4687]: I0131 06:47:42.627484 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee039356-c458-45b0-84a6-c533eec8da86" Jan 31 06:47:42 crc kubenswrapper[4687]: I0131 06:47:42.627509 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:42 crc kubenswrapper[4687]: I0131 06:47:42.637828 4687 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:47:42 crc kubenswrapper[4687]: I0131 06:47:42.726647 4687 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="28fb53f9-5887-491b-99a3-50caa3c72a68" Jan 31 06:47:43 crc kubenswrapper[4687]: I0131 06:47:43.085449 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:47:43 crc kubenswrapper[4687]: I0131 06:47:43.091150 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 
31 06:47:43 crc kubenswrapper[4687]: I0131 06:47:43.632646 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee039356-c458-45b0-84a6-c533eec8da86" Jan 31 06:47:43 crc kubenswrapper[4687]: I0131 06:47:43.634115 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee039356-c458-45b0-84a6-c533eec8da86" Jan 31 06:47:43 crc kubenswrapper[4687]: I0131 06:47:43.657245 4687 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="28fb53f9-5887-491b-99a3-50caa3c72a68" Jan 31 06:47:43 crc kubenswrapper[4687]: I0131 06:47:43.897285 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:47:43 crc kubenswrapper[4687]: I0131 06:47:43.952401 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:47:44 crc kubenswrapper[4687]: I0131 06:47:44.002421 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:47:44 crc kubenswrapper[4687]: I0131 06:47:44.039707 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:47:44 crc kubenswrapper[4687]: I0131 06:47:44.175503 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:47:45 crc kubenswrapper[4687]: I0131 06:47:45.757786 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:47:46 crc kubenswrapper[4687]: I0131 06:47:46.792635 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:47:46 crc kubenswrapper[4687]: I0131 06:47:46.837203 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:47:48 crc kubenswrapper[4687]: I0131 06:47:48.606213 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 06:47:55 crc kubenswrapper[4687]: I0131 06:47:55.131399 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 06:47:55 crc kubenswrapper[4687]: I0131 06:47:55.200647 4687 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 31 06:47:56 crc kubenswrapper[4687]: I0131 06:47:56.364272 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 06:47:56 crc kubenswrapper[4687]: I0131 06:47:56.945778 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 06:47:56 crc kubenswrapper[4687]: I0131 06:47:56.980297 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 06:47:57 crc kubenswrapper[4687]: I0131 06:47:57.244582 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 06:47:57 crc kubenswrapper[4687]: I0131 06:47:57.393085 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 06:47:58 crc kubenswrapper[4687]: I0131 06:47:58.408396 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 06:47:58 crc kubenswrapper[4687]: I0131 
06:47:58.593222 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 06:47:58 crc kubenswrapper[4687]: I0131 06:47:58.650767 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 06:47:58 crc kubenswrapper[4687]: I0131 06:47:58.683174 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 06:47:58 crc kubenswrapper[4687]: I0131 06:47:58.717761 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 06:47:58 crc kubenswrapper[4687]: I0131 06:47:58.775191 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 06:47:58 crc kubenswrapper[4687]: I0131 06:47:58.796991 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.036645 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.044668 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.133902 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.144738 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.216635 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 
06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.306266 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.315852 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.341945 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.346282 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.371032 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.515578 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.563264 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.685126 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.732334 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.779306 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.779375 4687 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.783040 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.790781 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 06:47:59 crc kubenswrapper[4687]: I0131 06:47:59.827214 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 06:48:00 crc kubenswrapper[4687]: I0131 06:48:00.053172 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 06:48:00 crc kubenswrapper[4687]: I0131 06:48:00.453988 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 06:48:00 crc kubenswrapper[4687]: I0131 06:48:00.483220 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 06:48:00 crc kubenswrapper[4687]: I0131 06:48:00.669519 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 06:48:00 crc kubenswrapper[4687]: I0131 06:48:00.674020 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 06:48:00 crc kubenswrapper[4687]: I0131 06:48:00.741099 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 06:48:00 crc kubenswrapper[4687]: I0131 06:48:00.858387 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 06:48:00 crc 
kubenswrapper[4687]: I0131 06:48:00.880450 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 06:48:01 crc kubenswrapper[4687]: I0131 06:48:01.084843 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 06:48:01 crc kubenswrapper[4687]: I0131 06:48:01.170162 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 06:48:01 crc kubenswrapper[4687]: I0131 06:48:01.184967 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 06:48:01 crc kubenswrapper[4687]: I0131 06:48:01.308977 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 06:48:01 crc kubenswrapper[4687]: I0131 06:48:01.342606 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 06:48:01 crc kubenswrapper[4687]: I0131 06:48:01.467534 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 06:48:01 crc kubenswrapper[4687]: I0131 06:48:01.505063 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 06:48:01 crc kubenswrapper[4687]: I0131 06:48:01.565921 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 06:48:01 crc kubenswrapper[4687]: I0131 06:48:01.691782 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 06:48:01 crc kubenswrapper[4687]: I0131 06:48:01.851759 4687 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Jan 31 06:48:01 crc kubenswrapper[4687]: I0131 06:48:01.861726 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 06:48:01 crc kubenswrapper[4687]: I0131 06:48:01.992622 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 06:48:02 crc kubenswrapper[4687]: I0131 06:48:02.058316 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 06:48:02 crc kubenswrapper[4687]: I0131 06:48:02.094191 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 06:48:02 crc kubenswrapper[4687]: I0131 06:48:02.179287 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 06:48:02 crc kubenswrapper[4687]: I0131 06:48:02.288549 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 06:48:02 crc kubenswrapper[4687]: I0131 06:48:02.437155 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 06:48:02 crc kubenswrapper[4687]: I0131 06:48:02.455663 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 06:48:02 crc kubenswrapper[4687]: I0131 06:48:02.499854 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 06:48:02 crc kubenswrapper[4687]: I0131 06:48:02.575962 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 06:48:02 crc kubenswrapper[4687]: I0131 06:48:02.624144 4687 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 06:48:03 crc kubenswrapper[4687]: I0131 06:48:03.002744 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 06:48:03 crc kubenswrapper[4687]: I0131 06:48:03.095239 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 06:48:03 crc kubenswrapper[4687]: I0131 06:48:03.112704 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 06:48:03 crc kubenswrapper[4687]: I0131 06:48:03.246093 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 06:48:03 crc kubenswrapper[4687]: I0131 06:48:03.311844 4687 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 06:48:03 crc kubenswrapper[4687]: I0131 06:48:03.478081 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 06:48:03 crc kubenswrapper[4687]: I0131 06:48:03.663883 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 06:48:03 crc kubenswrapper[4687]: I0131 06:48:03.684953 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 06:48:03 crc kubenswrapper[4687]: I0131 06:48:03.756956 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 06:48:03 crc kubenswrapper[4687]: I0131 06:48:03.833592 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 06:48:03 crc kubenswrapper[4687]: 
I0131 06:48:03.913289 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 06:48:04 crc kubenswrapper[4687]: I0131 06:48:04.090907 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 06:48:04 crc kubenswrapper[4687]: I0131 06:48:04.162294 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 06:48:04 crc kubenswrapper[4687]: I0131 06:48:04.200202 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 06:48:04 crc kubenswrapper[4687]: I0131 06:48:04.222678 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 06:48:04 crc kubenswrapper[4687]: I0131 06:48:04.309138 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 06:48:04 crc kubenswrapper[4687]: I0131 06:48:04.326897 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 06:48:04 crc kubenswrapper[4687]: I0131 06:48:04.367962 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 06:48:04 crc kubenswrapper[4687]: I0131 06:48:04.371259 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 06:48:04 crc kubenswrapper[4687]: I0131 06:48:04.380350 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 06:48:04 crc kubenswrapper[4687]: I0131 06:48:04.982272 4687 reflector.go:368] Caches populated for *v1.RuntimeClass from 
k8s.io/client-go/informers/factory.go:160 Jan 31 06:48:04 crc kubenswrapper[4687]: I0131 06:48:04.989704 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 06:48:05 crc kubenswrapper[4687]: I0131 06:48:05.070561 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 06:48:05 crc kubenswrapper[4687]: I0131 06:48:05.127350 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 06:48:05 crc kubenswrapper[4687]: I0131 06:48:05.488926 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 06:48:05 crc kubenswrapper[4687]: I0131 06:48:05.490066 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 06:48:05 crc kubenswrapper[4687]: I0131 06:48:05.490071 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 06:48:05 crc kubenswrapper[4687]: I0131 06:48:05.490240 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 06:48:05 crc kubenswrapper[4687]: I0131 06:48:05.497021 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 06:48:05 crc kubenswrapper[4687]: I0131 06:48:05.515043 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 06:48:05 crc kubenswrapper[4687]: I0131 06:48:05.536198 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 06:48:05 crc kubenswrapper[4687]: I0131 06:48:05.710123 4687 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 06:48:05 crc kubenswrapper[4687]: I0131 06:48:05.733746 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 06:48:05 crc kubenswrapper[4687]: I0131 06:48:05.888483 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 06:48:05 crc kubenswrapper[4687]: I0131 06:48:05.913991 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 06:48:05 crc kubenswrapper[4687]: I0131 06:48:05.993186 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 06:48:06 crc kubenswrapper[4687]: I0131 06:48:06.043336 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 06:48:06 crc kubenswrapper[4687]: I0131 06:48:06.120841 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 06:48:06 crc kubenswrapper[4687]: I0131 06:48:06.123557 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 06:48:06 crc kubenswrapper[4687]: I0131 06:48:06.147918 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 06:48:06 crc kubenswrapper[4687]: I0131 06:48:06.252382 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 06:48:06 crc kubenswrapper[4687]: I0131 06:48:06.263441 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 06:48:06 crc kubenswrapper[4687]: I0131 06:48:06.416740 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 06:48:06 crc kubenswrapper[4687]: I0131 06:48:06.432693 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 06:48:06 crc kubenswrapper[4687]: I0131 06:48:06.497183 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 06:48:06 crc kubenswrapper[4687]: I0131 06:48:06.656792 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 06:48:06 crc kubenswrapper[4687]: I0131 06:48:06.661629 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 06:48:06 crc kubenswrapper[4687]: I0131 06:48:06.671671 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 06:48:06 crc kubenswrapper[4687]: I0131 06:48:06.884924 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 06:48:06 crc kubenswrapper[4687]: I0131 06:48:06.960549 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.000508 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.033077 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.043286 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.065975 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.096801 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.109331 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.258250 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.304876 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.360208 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.470872 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.491775 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.498114 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.659277 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.665505 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.684871 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.715001 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.880703 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 06:48:07 crc kubenswrapper[4687]: I0131 06:48:07.900468 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 06:48:08 crc kubenswrapper[4687]: I0131 06:48:08.036655 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 06:48:08 crc kubenswrapper[4687]: I0131 06:48:08.038614 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 06:48:08 crc kubenswrapper[4687]: I0131 06:48:08.157581 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 06:48:08 crc kubenswrapper[4687]: I0131 06:48:08.223608 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 06:48:08 crc kubenswrapper[4687]: I0131 06:48:08.234934 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 06:48:08 crc kubenswrapper[4687]: I0131 06:48:08.345725 4687 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 06:48:08 crc kubenswrapper[4687]: I0131 06:48:08.350596 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 06:48:08 crc kubenswrapper[4687]: I0131 06:48:08.419579 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 06:48:08 crc kubenswrapper[4687]: I0131 06:48:08.543718 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 06:48:08 crc kubenswrapper[4687]: I0131 06:48:08.886523 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 06:48:09 crc kubenswrapper[4687]: I0131 06:48:09.681321 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 06:48:09 crc kubenswrapper[4687]: I0131 06:48:09.800723 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 06:48:09 crc kubenswrapper[4687]: I0131 06:48:09.993381 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 06:48:10 crc kubenswrapper[4687]: I0131 06:48:10.267250 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 06:48:10 crc kubenswrapper[4687]: I0131 06:48:10.543316 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 06:48:10 crc kubenswrapper[4687]: I0131 06:48:10.654196 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 06:48:10 crc kubenswrapper[4687]: I0131 06:48:10.700743 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 06:48:10 crc kubenswrapper[4687]: I0131 06:48:10.837248 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 06:48:11 crc kubenswrapper[4687]: I0131 06:48:11.005477 4687 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 06:48:11 crc kubenswrapper[4687]: I0131 06:48:11.270342 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 06:48:11 crc kubenswrapper[4687]: I0131 06:48:11.778871 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 06:48:15 crc kubenswrapper[4687]: I0131 06:48:15.283462 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 06:48:17 crc kubenswrapper[4687]: I0131 06:48:17.516447 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 06:48:17 crc kubenswrapper[4687]: I0131 06:48:17.543007 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 06:48:18 crc kubenswrapper[4687]: I0131 06:48:18.067764 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 06:48:19 crc kubenswrapper[4687]: I0131 06:48:19.605231 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 06:48:19 crc kubenswrapper[4687]: I0131 
06:48:19.957487 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 06:48:21 crc kubenswrapper[4687]: I0131 06:48:21.069069 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 06:48:21 crc kubenswrapper[4687]: I0131 06:48:21.259909 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 06:48:22 crc kubenswrapper[4687]: I0131 06:48:22.202233 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 06:48:22 crc kubenswrapper[4687]: I0131 06:48:22.336592 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 06:48:22 crc kubenswrapper[4687]: I0131 06:48:22.575643 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 06:48:22 crc kubenswrapper[4687]: I0131 06:48:22.668074 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 06:48:23 crc kubenswrapper[4687]: I0131 06:48:23.255527 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 06:48:24 crc kubenswrapper[4687]: I0131 06:48:24.392361 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 06:48:24 crc kubenswrapper[4687]: I0131 06:48:24.664898 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 06:48:25 crc kubenswrapper[4687]: I0131 06:48:25.581072 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 06:48:26 crc kubenswrapper[4687]: 
I0131 06:48:26.119278 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 06:48:26 crc kubenswrapper[4687]: I0131 06:48:26.125863 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 06:48:26 crc kubenswrapper[4687]: I0131 06:48:26.909050 4687 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 06:48:27 crc kubenswrapper[4687]: I0131 06:48:27.146179 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 06:48:27 crc kubenswrapper[4687]: I0131 06:48:27.181075 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 06:48:27 crc kubenswrapper[4687]: I0131 06:48:27.356861 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 06:48:27 crc kubenswrapper[4687]: I0131 06:48:27.420068 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 06:48:27 crc kubenswrapper[4687]: I0131 06:48:27.753510 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 06:48:28 crc kubenswrapper[4687]: I0131 06:48:28.163602 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 06:48:28 crc kubenswrapper[4687]: I0131 06:48:28.791772 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 06:48:29 crc kubenswrapper[4687]: I0131 06:48:29.408229 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 06:48:29 crc kubenswrapper[4687]: I0131 06:48:29.483817 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 06:48:29 crc kubenswrapper[4687]: I0131 06:48:29.753845 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 06:48:29 crc kubenswrapper[4687]: I0131 06:48:29.986498 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 06:48:30 crc kubenswrapper[4687]: I0131 06:48:30.497444 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 06:48:30 crc kubenswrapper[4687]: I0131 06:48:30.671439 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 06:48:30 crc kubenswrapper[4687]: I0131 06:48:30.873251 4687 generic.go:334] "Generic (PLEG): container finished" podID="175a043a-d6f7-4c39-953b-560986f36646" containerID="04aa1e85dae0b8c12e139d1fff2ff7fff3db50a78fe9961ef50050961eb3f9af" exitCode=0 Jan 31 06:48:30 crc kubenswrapper[4687]: I0131 06:48:30.873290 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" event={"ID":"175a043a-d6f7-4c39-953b-560986f36646","Type":"ContainerDied","Data":"04aa1e85dae0b8c12e139d1fff2ff7fff3db50a78fe9961ef50050961eb3f9af"} Jan 31 06:48:30 crc kubenswrapper[4687]: I0131 06:48:30.873728 4687 scope.go:117] "RemoveContainer" containerID="04aa1e85dae0b8c12e139d1fff2ff7fff3db50a78fe9961ef50050961eb3f9af" Jan 31 06:48:31 crc kubenswrapper[4687]: I0131 06:48:31.159444 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 06:48:31 crc 
kubenswrapper[4687]: I0131 06:48:31.881462 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-c27wp_175a043a-d6f7-4c39-953b-560986f36646/marketplace-operator/1.log" Jan 31 06:48:31 crc kubenswrapper[4687]: I0131 06:48:31.882946 4687 generic.go:334] "Generic (PLEG): container finished" podID="175a043a-d6f7-4c39-953b-560986f36646" containerID="2f767996d1145a87fdf9c2618403a2575ba71f616d9df92b64a695977acdd156" exitCode=1 Jan 31 06:48:31 crc kubenswrapper[4687]: I0131 06:48:31.882995 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" event={"ID":"175a043a-d6f7-4c39-953b-560986f36646","Type":"ContainerDied","Data":"2f767996d1145a87fdf9c2618403a2575ba71f616d9df92b64a695977acdd156"} Jan 31 06:48:31 crc kubenswrapper[4687]: I0131 06:48:31.883054 4687 scope.go:117] "RemoveContainer" containerID="04aa1e85dae0b8c12e139d1fff2ff7fff3db50a78fe9961ef50050961eb3f9af" Jan 31 06:48:31 crc kubenswrapper[4687]: I0131 06:48:31.883614 4687 scope.go:117] "RemoveContainer" containerID="2f767996d1145a87fdf9c2618403a2575ba71f616d9df92b64a695977acdd156" Jan 31 06:48:31 crc kubenswrapper[4687]: E0131 06:48:31.883934 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-c27wp_openshift-marketplace(175a043a-d6f7-4c39-953b-560986f36646)\"" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" podUID="175a043a-d6f7-4c39-953b-560986f36646" Jan 31 06:48:32 crc kubenswrapper[4687]: I0131 06:48:32.007316 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 06:48:32 crc kubenswrapper[4687]: I0131 06:48:32.283823 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 06:48:32 crc kubenswrapper[4687]: I0131 06:48:32.475952 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 06:48:32 crc kubenswrapper[4687]: I0131 06:48:32.889819 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-c27wp_175a043a-d6f7-4c39-953b-560986f36646/marketplace-operator/1.log" Jan 31 06:48:32 crc kubenswrapper[4687]: I0131 06:48:32.909928 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 06:48:32 crc kubenswrapper[4687]: I0131 06:48:32.999268 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 06:48:33 crc kubenswrapper[4687]: I0131 06:48:33.094654 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 06:48:33 crc kubenswrapper[4687]: I0131 06:48:33.131082 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 06:48:33 crc kubenswrapper[4687]: I0131 06:48:33.166345 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 06:48:33 crc kubenswrapper[4687]: I0131 06:48:33.258304 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 06:48:34 crc kubenswrapper[4687]: I0131 06:48:34.084738 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 06:48:34 crc kubenswrapper[4687]: I0131 06:48:34.141846 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 06:48:34 crc kubenswrapper[4687]: I0131 06:48:34.238858 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 06:48:34 crc kubenswrapper[4687]: I0131 06:48:34.340901 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 06:48:34 crc kubenswrapper[4687]: I0131 06:48:34.647181 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 06:48:34 crc kubenswrapper[4687]: I0131 06:48:34.709229 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 06:48:34 crc kubenswrapper[4687]: I0131 06:48:34.736562 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 06:48:34 crc kubenswrapper[4687]: I0131 06:48:34.894085 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 06:48:35 crc kubenswrapper[4687]: I0131 06:48:35.030685 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 06:48:35 crc kubenswrapper[4687]: I0131 06:48:35.311227 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 06:48:35 crc kubenswrapper[4687]: I0131 06:48:35.337747 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 06:48:35 crc kubenswrapper[4687]: I0131 06:48:35.359881 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 06:48:35 crc kubenswrapper[4687]: I0131 06:48:35.462112 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 06:48:35 crc kubenswrapper[4687]: I0131 06:48:35.563279 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:48:35 crc kubenswrapper[4687]: I0131 06:48:35.563328 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:48:35 crc kubenswrapper[4687]: I0131 06:48:35.563902 4687 scope.go:117] "RemoveContainer" containerID="2f767996d1145a87fdf9c2618403a2575ba71f616d9df92b64a695977acdd156" Jan 31 06:48:35 crc kubenswrapper[4687]: E0131 06:48:35.564161 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-c27wp_openshift-marketplace(175a043a-d6f7-4c39-953b-560986f36646)\"" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" podUID="175a043a-d6f7-4c39-953b-560986f36646" Jan 31 06:48:36 crc kubenswrapper[4687]: I0131 06:48:36.051121 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 06:48:37 crc kubenswrapper[4687]: I0131 06:48:37.106027 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 06:48:37 crc kubenswrapper[4687]: I0131 06:48:37.632747 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 06:48:37 crc kubenswrapper[4687]: I0131 06:48:37.687094 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 06:48:37 crc kubenswrapper[4687]: I0131 06:48:37.723515 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 06:48:38 crc kubenswrapper[4687]: I0131 06:48:38.219042 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 06:48:38 crc kubenswrapper[4687]: I0131 06:48:38.892804 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 06:48:38 crc kubenswrapper[4687]: I0131 06:48:38.906013 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 06:48:38 crc kubenswrapper[4687]: I0131 06:48:38.976960 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 06:48:39 crc kubenswrapper[4687]: I0131 06:48:39.010005 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 06:48:39 crc kubenswrapper[4687]: I0131 06:48:39.061927 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 06:48:39 crc kubenswrapper[4687]: I0131 06:48:39.116875 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 06:48:40 crc kubenswrapper[4687]: I0131 06:48:40.153623 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 06:48:40 crc kubenswrapper[4687]: I0131 06:48:40.164288 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 06:48:40 crc kubenswrapper[4687]: I0131 06:48:40.392668 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 06:48:40 crc kubenswrapper[4687]: 
I0131 06:48:40.614685 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 06:48:40 crc kubenswrapper[4687]: I0131 06:48:40.827039 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 06:48:41 crc kubenswrapper[4687]: I0131 06:48:41.300085 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 06:48:41 crc kubenswrapper[4687]: I0131 06:48:41.411781 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 06:48:42 crc kubenswrapper[4687]: I0131 06:48:42.577840 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 06:48:42 crc kubenswrapper[4687]: I0131 06:48:42.678075 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 06:48:42 crc kubenswrapper[4687]: I0131 06:48:42.689643 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 06:48:42 crc kubenswrapper[4687]: I0131 06:48:42.844679 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.155625 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.182277 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.563958 4687 reflector.go:368] Caches populated for *v1.Pod from 
pkg/kubelet/config/apiserver.go:66 Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.564184 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g6md9" podStartSLOduration=72.953815787 podStartE2EDuration="3m10.564172866s" podCreationTimestamp="2026-01-31 06:45:33 +0000 UTC" firstStartedPulling="2026-01-31 06:45:35.669547491 +0000 UTC m=+161.946807066" lastFinishedPulling="2026-01-31 06:47:33.27990456 +0000 UTC m=+279.557164145" observedRunningTime="2026-01-31 06:47:42.724299269 +0000 UTC m=+289.001558854" watchObservedRunningTime="2026-01-31 06:48:43.564172866 +0000 UTC m=+349.841432441" Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.564559 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kpmd6" podStartSLOduration=71.970174387 podStartE2EDuration="3m8.564555247s" podCreationTimestamp="2026-01-31 06:45:35 +0000 UTC" firstStartedPulling="2026-01-31 06:45:36.692399607 +0000 UTC m=+162.969659182" lastFinishedPulling="2026-01-31 06:47:33.286780467 +0000 UTC m=+279.564040042" observedRunningTime="2026-01-31 06:47:42.679616805 +0000 UTC m=+288.956876390" watchObservedRunningTime="2026-01-31 06:48:43.564555247 +0000 UTC m=+349.841814812" Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.565388 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mrjq6" podStartSLOduration=76.438891145 podStartE2EDuration="3m7.565382189s" podCreationTimestamp="2026-01-31 06:45:36 +0000 UTC" firstStartedPulling="2026-01-31 06:45:38.759895471 +0000 UTC m=+165.037155046" lastFinishedPulling="2026-01-31 06:47:29.886386515 +0000 UTC m=+276.163646090" observedRunningTime="2026-01-31 06:47:42.666935421 +0000 UTC m=+288.944194996" watchObservedRunningTime="2026-01-31 06:48:43.565382189 +0000 UTC m=+349.842641754" Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.565619 
4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xkfv6" podStartSLOduration=83.859788335 podStartE2EDuration="3m8.565614896s" podCreationTimestamp="2026-01-31 06:45:35 +0000 UTC" firstStartedPulling="2026-01-31 06:45:36.704556564 +0000 UTC m=+162.981816139" lastFinishedPulling="2026-01-31 06:47:21.410383125 +0000 UTC m=+267.687642700" observedRunningTime="2026-01-31 06:47:42.711205993 +0000 UTC m=+288.988465568" watchObservedRunningTime="2026-01-31 06:48:43.565614896 +0000 UTC m=+349.842874471" Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.565864 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l2btx" podStartSLOduration=72.953480275 podStartE2EDuration="3m10.565856622s" podCreationTimestamp="2026-01-31 06:45:33 +0000 UTC" firstStartedPulling="2026-01-31 06:45:35.66776114 +0000 UTC m=+161.945020725" lastFinishedPulling="2026-01-31 06:47:33.280137497 +0000 UTC m=+279.557397072" observedRunningTime="2026-01-31 06:47:42.811811806 +0000 UTC m=+289.089071391" watchObservedRunningTime="2026-01-31 06:48:43.565856622 +0000 UTC m=+349.843116197" Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.566536 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j46rp" podStartSLOduration=79.87027213 podStartE2EDuration="3m10.566531681s" podCreationTimestamp="2026-01-31 06:45:33 +0000 UTC" firstStartedPulling="2026-01-31 06:45:35.666827444 +0000 UTC m=+161.944087019" lastFinishedPulling="2026-01-31 06:47:26.363086995 +0000 UTC m=+272.640346570" observedRunningTime="2026-01-31 06:47:42.654447571 +0000 UTC m=+288.931707156" watchObservedRunningTime="2026-01-31 06:48:43.566531681 +0000 UTC m=+349.843791256" Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.567083 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-w6tt8" podStartSLOduration=86.695572404 podStartE2EDuration="3m10.567078756s" podCreationTimestamp="2026-01-31 06:45:33 +0000 UTC" firstStartedPulling="2026-01-31 06:45:35.676877261 +0000 UTC m=+161.954136836" lastFinishedPulling="2026-01-31 06:47:19.548383583 +0000 UTC m=+265.825643188" observedRunningTime="2026-01-31 06:47:42.753711398 +0000 UTC m=+289.030970993" watchObservedRunningTime="2026-01-31 06:48:43.567078756 +0000 UTC m=+349.844338331" Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.567729 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q7f5g" podStartSLOduration=72.851708032 podStartE2EDuration="3m7.567724683s" podCreationTimestamp="2026-01-31 06:45:36 +0000 UTC" firstStartedPulling="2026-01-31 06:45:37.739164796 +0000 UTC m=+164.016424371" lastFinishedPulling="2026-01-31 06:47:32.455181447 +0000 UTC m=+278.732441022" observedRunningTime="2026-01-31 06:47:42.694952962 +0000 UTC m=+288.972212537" watchObservedRunningTime="2026-01-31 06:48:43.567724683 +0000 UTC m=+349.844984248" Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.568718 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.568754 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.577001 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.603359 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=61.603333405 podStartE2EDuration="1m1.603333405s" podCreationTimestamp="2026-01-31 06:47:42 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:48:43.601122275 +0000 UTC m=+349.878381850" watchObservedRunningTime="2026-01-31 06:48:43.603333405 +0000 UTC m=+349.880593020" Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.617556 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 06:48:43 crc kubenswrapper[4687]: I0131 06:48:43.700701 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 06:48:44 crc kubenswrapper[4687]: I0131 06:48:44.053563 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 06:48:44 crc kubenswrapper[4687]: I0131 06:48:44.511798 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 06:48:44 crc kubenswrapper[4687]: I0131 06:48:44.617284 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:48:44 crc kubenswrapper[4687]: I0131 06:48:44.617354 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:48:44 crc kubenswrapper[4687]: I0131 06:48:44.621775 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:48:44 crc kubenswrapper[4687]: I0131 06:48:44.633399 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=1.633380734 podStartE2EDuration="1.633380734s" podCreationTimestamp="2026-01-31 06:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-31 06:48:44.631197085 +0000 UTC m=+350.908456660" watchObservedRunningTime="2026-01-31 06:48:44.633380734 +0000 UTC m=+350.910640309" Jan 31 06:48:44 crc kubenswrapper[4687]: I0131 06:48:44.750133 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 06:48:44 crc kubenswrapper[4687]: I0131 06:48:44.949322 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 06:48:45 crc kubenswrapper[4687]: I0131 06:48:45.437901 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 06:48:46 crc kubenswrapper[4687]: I0131 06:48:46.122891 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 06:48:46 crc kubenswrapper[4687]: I0131 06:48:46.381355 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 06:48:46 crc kubenswrapper[4687]: I0131 06:48:46.778644 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 06:48:46 crc kubenswrapper[4687]: I0131 06:48:46.914014 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 06:48:47 crc kubenswrapper[4687]: I0131 06:48:47.603680 4687 scope.go:117] "RemoveContainer" containerID="2f767996d1145a87fdf9c2618403a2575ba71f616d9df92b64a695977acdd156" Jan 31 06:48:48 crc kubenswrapper[4687]: I0131 06:48:48.635865 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 06:48:48 crc kubenswrapper[4687]: I0131 06:48:48.681030 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 06:48:48 crc kubenswrapper[4687]: I0131 06:48:48.746993 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 06:48:48 crc kubenswrapper[4687]: I0131 06:48:48.773707 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 06:48:48 crc kubenswrapper[4687]: I0131 06:48:48.971403 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-c27wp_175a043a-d6f7-4c39-953b-560986f36646/marketplace-operator/1.log" Jan 31 06:48:48 crc kubenswrapper[4687]: I0131 06:48:48.971497 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" event={"ID":"175a043a-d6f7-4c39-953b-560986f36646","Type":"ContainerStarted","Data":"209ea933cb20ba08ade450c7c3dc23bbfe777903ce8323c2209d4c454b37809d"} Jan 31 06:48:48 crc kubenswrapper[4687]: I0131 06:48:48.971999 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:48:48 crc kubenswrapper[4687]: I0131 06:48:48.975120 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 06:48:48 crc kubenswrapper[4687]: I0131 06:48:48.975168 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:48:49 crc kubenswrapper[4687]: I0131 06:48:49.558471 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 06:48:49 crc kubenswrapper[4687]: I0131 06:48:49.828807 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 06:48:50 crc kubenswrapper[4687]: I0131 06:48:50.391795 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 06:48:50 crc kubenswrapper[4687]: I0131 06:48:50.430786 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 06:48:51 crc kubenswrapper[4687]: I0131 06:48:51.131790 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 06:48:52 crc kubenswrapper[4687]: I0131 06:48:52.134657 4687 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 06:48:52 crc kubenswrapper[4687]: I0131 06:48:52.134980 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1239726cfad6b5a466c1ed70b6a1fb4d6c62a4b7eba460918770d294df419758" gracePeriod=5 Jan 31 06:48:52 crc kubenswrapper[4687]: I0131 06:48:52.210392 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 06:48:52 crc kubenswrapper[4687]: I0131 06:48:52.295554 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.727157 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.727701 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.856440 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.856510 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.856571 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.856591 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.856631 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.856699 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: 
"resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.856729 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.856751 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.856846 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.857691 4687 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.857719 4687 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.857736 4687 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.857746 4687 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.864798 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:48:57 crc kubenswrapper[4687]: I0131 06:48:57.958744 4687 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 06:48:58 crc kubenswrapper[4687]: I0131 06:48:58.029995 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 06:48:58 crc kubenswrapper[4687]: I0131 06:48:58.030052 4687 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="1239726cfad6b5a466c1ed70b6a1fb4d6c62a4b7eba460918770d294df419758" exitCode=137 Jan 31 06:48:58 crc kubenswrapper[4687]: I0131 06:48:58.030105 4687 scope.go:117] "RemoveContainer" containerID="1239726cfad6b5a466c1ed70b6a1fb4d6c62a4b7eba460918770d294df419758" Jan 31 06:48:58 crc kubenswrapper[4687]: I0131 06:48:58.030159 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 06:48:58 crc kubenswrapper[4687]: I0131 06:48:58.047727 4687 scope.go:117] "RemoveContainer" containerID="1239726cfad6b5a466c1ed70b6a1fb4d6c62a4b7eba460918770d294df419758" Jan 31 06:48:58 crc kubenswrapper[4687]: E0131 06:48:58.048216 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1239726cfad6b5a466c1ed70b6a1fb4d6c62a4b7eba460918770d294df419758\": container with ID starting with 1239726cfad6b5a466c1ed70b6a1fb4d6c62a4b7eba460918770d294df419758 not found: ID does not exist" containerID="1239726cfad6b5a466c1ed70b6a1fb4d6c62a4b7eba460918770d294df419758" Jan 31 06:48:58 crc kubenswrapper[4687]: I0131 06:48:58.048266 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1239726cfad6b5a466c1ed70b6a1fb4d6c62a4b7eba460918770d294df419758"} err="failed to get container status \"1239726cfad6b5a466c1ed70b6a1fb4d6c62a4b7eba460918770d294df419758\": rpc error: code = NotFound desc = could not find container \"1239726cfad6b5a466c1ed70b6a1fb4d6c62a4b7eba460918770d294df419758\": container with ID starting with 1239726cfad6b5a466c1ed70b6a1fb4d6c62a4b7eba460918770d294df419758 not found: ID does not exist" Jan 31 06:48:58 crc kubenswrapper[4687]: I0131 06:48:58.684002 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:48:58 crc kubenswrapper[4687]: I0131 06:48:58.684105 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:48:59 crc kubenswrapper[4687]: I0131 06:48:59.609733 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 31 06:48:59 crc kubenswrapper[4687]: I0131 06:48:59.610717 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 31 06:48:59 crc kubenswrapper[4687]: I0131 06:48:59.622210 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 06:48:59 crc kubenswrapper[4687]: I0131 06:48:59.622260 4687 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="7225cc9e-66e8-4cd8-90ed-af70836ab4e7" Jan 31 06:48:59 crc kubenswrapper[4687]: I0131 06:48:59.626091 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 06:48:59 crc kubenswrapper[4687]: I0131 06:48:59.626120 4687 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="7225cc9e-66e8-4cd8-90ed-af70836ab4e7" Jan 31 06:49:17 crc kubenswrapper[4687]: I0131 06:49:17.674122 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8qhsc"] Jan 31 06:49:17 crc kubenswrapper[4687]: I0131 06:49:17.674931 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" podUID="e2e4841e-e880-45f4-8769-cd9fea35654e" containerName="controller-manager" containerID="cri-o://ba1d85580ab924a257e43454bb75eb445d26ea79fdd905a2daf33edcba72c19e" gracePeriod=30 Jan 31 06:49:17 crc 
kubenswrapper[4687]: I0131 06:49:17.765179 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z"] Jan 31 06:49:17 crc kubenswrapper[4687]: I0131 06:49:17.765401 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" podUID="65572f7a-260e-4d12-b9ad-e17f1b17eab4" containerName="route-controller-manager" containerID="cri-o://ac3ae5422bf890f9d59028d983f7728ae5eadb459b5c6c4efa88116d4de8795b" gracePeriod=30 Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.138962 4687 generic.go:334] "Generic (PLEG): container finished" podID="e2e4841e-e880-45f4-8769-cd9fea35654e" containerID="ba1d85580ab924a257e43454bb75eb445d26ea79fdd905a2daf33edcba72c19e" exitCode=0 Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.139044 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" event={"ID":"e2e4841e-e880-45f4-8769-cd9fea35654e","Type":"ContainerDied","Data":"ba1d85580ab924a257e43454bb75eb445d26ea79fdd905a2daf33edcba72c19e"} Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.141079 4687 generic.go:334] "Generic (PLEG): container finished" podID="65572f7a-260e-4d12-b9ad-e17f1b17eab4" containerID="ac3ae5422bf890f9d59028d983f7728ae5eadb459b5c6c4efa88116d4de8795b" exitCode=0 Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.141112 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" event={"ID":"65572f7a-260e-4d12-b9ad-e17f1b17eab4","Type":"ContainerDied","Data":"ac3ae5422bf890f9d59028d983f7728ae5eadb459b5c6c4efa88116d4de8795b"} Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.141130 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" 
event={"ID":"65572f7a-260e-4d12-b9ad-e17f1b17eab4","Type":"ContainerDied","Data":"da4ffea02b55d3dc10851789a38cd8537cb690bd84dea778e25520a890a62c07"} Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.141142 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da4ffea02b55d3dc10851789a38cd8537cb690bd84dea778e25520a890a62c07" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.145277 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.318320 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65572f7a-260e-4d12-b9ad-e17f1b17eab4-serving-cert\") pod \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\" (UID: \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\") " Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.318436 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ctqc\" (UniqueName: \"kubernetes.io/projected/65572f7a-260e-4d12-b9ad-e17f1b17eab4-kube-api-access-7ctqc\") pod \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\" (UID: \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\") " Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.318474 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65572f7a-260e-4d12-b9ad-e17f1b17eab4-client-ca\") pod \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\" (UID: \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\") " Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.318498 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65572f7a-260e-4d12-b9ad-e17f1b17eab4-config\") pod \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\" (UID: \"65572f7a-260e-4d12-b9ad-e17f1b17eab4\") " Jan 31 
06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.319326 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65572f7a-260e-4d12-b9ad-e17f1b17eab4-config" (OuterVolumeSpecName: "config") pod "65572f7a-260e-4d12-b9ad-e17f1b17eab4" (UID: "65572f7a-260e-4d12-b9ad-e17f1b17eab4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.321722 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65572f7a-260e-4d12-b9ad-e17f1b17eab4-client-ca" (OuterVolumeSpecName: "client-ca") pod "65572f7a-260e-4d12-b9ad-e17f1b17eab4" (UID: "65572f7a-260e-4d12-b9ad-e17f1b17eab4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.324494 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65572f7a-260e-4d12-b9ad-e17f1b17eab4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "65572f7a-260e-4d12-b9ad-e17f1b17eab4" (UID: "65572f7a-260e-4d12-b9ad-e17f1b17eab4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.331831 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65572f7a-260e-4d12-b9ad-e17f1b17eab4-kube-api-access-7ctqc" (OuterVolumeSpecName: "kube-api-access-7ctqc") pod "65572f7a-260e-4d12-b9ad-e17f1b17eab4" (UID: "65572f7a-260e-4d12-b9ad-e17f1b17eab4"). InnerVolumeSpecName "kube-api-access-7ctqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.420446 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ctqc\" (UniqueName: \"kubernetes.io/projected/65572f7a-260e-4d12-b9ad-e17f1b17eab4-kube-api-access-7ctqc\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.420494 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65572f7a-260e-4d12-b9ad-e17f1b17eab4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.420506 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65572f7a-260e-4d12-b9ad-e17f1b17eab4-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.420517 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65572f7a-260e-4d12-b9ad-e17f1b17eab4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.526737 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.622376 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e4841e-e880-45f4-8769-cd9fea35654e-serving-cert\") pod \"e2e4841e-e880-45f4-8769-cd9fea35654e\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.622546 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-config\") pod \"e2e4841e-e880-45f4-8769-cd9fea35654e\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.622569 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-proxy-ca-bundles\") pod \"e2e4841e-e880-45f4-8769-cd9fea35654e\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.622599 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx625\" (UniqueName: \"kubernetes.io/projected/e2e4841e-e880-45f4-8769-cd9fea35654e-kube-api-access-qx625\") pod \"e2e4841e-e880-45f4-8769-cd9fea35654e\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.622617 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-client-ca\") pod \"e2e4841e-e880-45f4-8769-cd9fea35654e\" (UID: \"e2e4841e-e880-45f4-8769-cd9fea35654e\") " Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.623184 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e2e4841e-e880-45f4-8769-cd9fea35654e" (UID: "e2e4841e-e880-45f4-8769-cd9fea35654e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.623431 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-client-ca" (OuterVolumeSpecName: "client-ca") pod "e2e4841e-e880-45f4-8769-cd9fea35654e" (UID: "e2e4841e-e880-45f4-8769-cd9fea35654e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.623469 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-config" (OuterVolumeSpecName: "config") pod "e2e4841e-e880-45f4-8769-cd9fea35654e" (UID: "e2e4841e-e880-45f4-8769-cd9fea35654e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.623709 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.623725 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.623734 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2e4841e-e880-45f4-8769-cd9fea35654e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.625790 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e4841e-e880-45f4-8769-cd9fea35654e-kube-api-access-qx625" (OuterVolumeSpecName: "kube-api-access-qx625") pod "e2e4841e-e880-45f4-8769-cd9fea35654e" (UID: "e2e4841e-e880-45f4-8769-cd9fea35654e"). InnerVolumeSpecName "kube-api-access-qx625". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.625938 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e4841e-e880-45f4-8769-cd9fea35654e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e2e4841e-e880-45f4-8769-cd9fea35654e" (UID: "e2e4841e-e880-45f4-8769-cd9fea35654e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.724546 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx625\" (UniqueName: \"kubernetes.io/projected/e2e4841e-e880-45f4-8769-cd9fea35654e-kube-api-access-qx625\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:18 crc kubenswrapper[4687]: I0131 06:49:18.724593 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e4841e-e880-45f4-8769-cd9fea35654e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.147613 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.147605 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" event={"ID":"e2e4841e-e880-45f4-8769-cd9fea35654e","Type":"ContainerDied","Data":"d0c64483a4d7a502db042097e6ee4f877efc5354e3a1ec89823097f9eb096e78"} Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.147656 4687 scope.go:117] "RemoveContainer" containerID="ba1d85580ab924a257e43454bb75eb445d26ea79fdd905a2daf33edcba72c19e" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.147614 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8qhsc" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.179512 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8qhsc"] Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.184190 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8qhsc"] Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.188012 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z"] Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.191070 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fc67z"] Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.612839 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65572f7a-260e-4d12-b9ad-e17f1b17eab4" path="/var/lib/kubelet/pods/65572f7a-260e-4d12-b9ad-e17f1b17eab4/volumes" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.614225 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2e4841e-e880-45f4-8769-cd9fea35654e" path="/var/lib/kubelet/pods/e2e4841e-e880-45f4-8769-cd9fea35654e/volumes" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.617831 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh"] Jan 31 06:49:19 crc kubenswrapper[4687]: E0131 06:49:19.618176 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e4841e-e880-45f4-8769-cd9fea35654e" containerName="controller-manager" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.618202 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e4841e-e880-45f4-8769-cd9fea35654e" containerName="controller-manager" Jan 31 06:49:19 crc 
kubenswrapper[4687]: E0131 06:49:19.618228 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65572f7a-260e-4d12-b9ad-e17f1b17eab4" containerName="route-controller-manager" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.618238 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="65572f7a-260e-4d12-b9ad-e17f1b17eab4" containerName="route-controller-manager" Jan 31 06:49:19 crc kubenswrapper[4687]: E0131 06:49:19.618258 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" containerName="installer" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.618266 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" containerName="installer" Jan 31 06:49:19 crc kubenswrapper[4687]: E0131 06:49:19.618279 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.618286 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.618447 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b7f209e-3f0c-4092-b1c9-9d5fe27dfb29" containerName="installer" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.618470 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e4841e-e880-45f4-8769-cd9fea35654e" containerName="controller-manager" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.618485 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="65572f7a-260e-4d12-b9ad-e17f1b17eab4" containerName="route-controller-manager" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.618497 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 06:49:19 
crc kubenswrapper[4687]: I0131 06:49:19.619059 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.622898 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w"] Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.624078 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.624104 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.624123 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.624201 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.624465 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.624717 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.625023 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.628266 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.628363 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.628574 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.628965 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.629215 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.630111 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh"] Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.630155 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.634734 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w"] 
Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.638334 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.736815 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcmx4\" (UniqueName: \"kubernetes.io/projected/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-kube-api-access-gcmx4\") pod \"route-controller-manager-75c8d44cbc-ld2qh\" (UID: \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\") " pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.736895 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bcpl\" (UniqueName: \"kubernetes.io/projected/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-kube-api-access-8bcpl\") pod \"controller-manager-6b64cbf5d9-zgb9w\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.736962 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-client-ca\") pod \"controller-manager-6b64cbf5d9-zgb9w\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.736984 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-serving-cert\") pod \"controller-manager-6b64cbf5d9-zgb9w\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:19 crc 
kubenswrapper[4687]: I0131 06:49:19.737010 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-client-ca\") pod \"route-controller-manager-75c8d44cbc-ld2qh\" (UID: \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\") " pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.737036 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-config\") pod \"route-controller-manager-75c8d44cbc-ld2qh\" (UID: \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\") " pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.737207 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-config\") pod \"controller-manager-6b64cbf5d9-zgb9w\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.737489 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-serving-cert\") pod \"route-controller-manager-75c8d44cbc-ld2qh\" (UID: \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\") " pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.737556 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-proxy-ca-bundles\") pod \"controller-manager-6b64cbf5d9-zgb9w\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.839309 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bcpl\" (UniqueName: \"kubernetes.io/projected/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-kube-api-access-8bcpl\") pod \"controller-manager-6b64cbf5d9-zgb9w\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.839447 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-client-ca\") pod \"controller-manager-6b64cbf5d9-zgb9w\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.839493 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-serving-cert\") pod \"controller-manager-6b64cbf5d9-zgb9w\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.839530 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-client-ca\") pod \"route-controller-manager-75c8d44cbc-ld2qh\" (UID: \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\") " pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 
06:49:19.839576 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-config\") pod \"route-controller-manager-75c8d44cbc-ld2qh\" (UID: \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\") " pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.839631 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-config\") pod \"controller-manager-6b64cbf5d9-zgb9w\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.839746 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-proxy-ca-bundles\") pod \"controller-manager-6b64cbf5d9-zgb9w\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.839793 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-serving-cert\") pod \"route-controller-manager-75c8d44cbc-ld2qh\" (UID: \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\") " pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.839841 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcmx4\" (UniqueName: \"kubernetes.io/projected/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-kube-api-access-gcmx4\") pod \"route-controller-manager-75c8d44cbc-ld2qh\" (UID: 
\"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\") " pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.841040 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-config\") pod \"route-controller-manager-75c8d44cbc-ld2qh\" (UID: \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\") " pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.841378 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-client-ca\") pod \"route-controller-manager-75c8d44cbc-ld2qh\" (UID: \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\") " pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.842596 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-client-ca\") pod \"controller-manager-6b64cbf5d9-zgb9w\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.842799 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-config\") pod \"controller-manager-6b64cbf5d9-zgb9w\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.843593 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-proxy-ca-bundles\") pod \"controller-manager-6b64cbf5d9-zgb9w\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.843686 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-serving-cert\") pod \"route-controller-manager-75c8d44cbc-ld2qh\" (UID: \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\") " pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.844571 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-serving-cert\") pod \"controller-manager-6b64cbf5d9-zgb9w\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.856257 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bcpl\" (UniqueName: \"kubernetes.io/projected/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-kube-api-access-8bcpl\") pod \"controller-manager-6b64cbf5d9-zgb9w\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.862503 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcmx4\" (UniqueName: \"kubernetes.io/projected/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-kube-api-access-gcmx4\") pod \"route-controller-manager-75c8d44cbc-ld2qh\" (UID: \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\") " pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:19 crc kubenswrapper[4687]: 
I0131 06:49:19.941042 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:19 crc kubenswrapper[4687]: I0131 06:49:19.948018 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:20 crc kubenswrapper[4687]: I0131 06:49:20.188007 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh"] Jan 31 06:49:20 crc kubenswrapper[4687]: I0131 06:49:20.345719 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w"] Jan 31 06:49:20 crc kubenswrapper[4687]: W0131 06:49:20.353784 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a6ec4c1_a1a2_45d2_9c24_bfdff8e7726c.slice/crio-eef3c6bd9b15da3cc4689e73a8fe400a68364a98c660be048d5d0660ee07cf96 WatchSource:0}: Error finding container eef3c6bd9b15da3cc4689e73a8fe400a68364a98c660be048d5d0660ee07cf96: Status 404 returned error can't find the container with id eef3c6bd9b15da3cc4689e73a8fe400a68364a98c660be048d5d0660ee07cf96 Jan 31 06:49:20 crc kubenswrapper[4687]: I0131 06:49:20.662241 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w"] Jan 31 06:49:20 crc kubenswrapper[4687]: I0131 06:49:20.693428 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh"] Jan 31 06:49:21 crc kubenswrapper[4687]: I0131 06:49:21.165128 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" 
event={"ID":"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c","Type":"ContainerStarted","Data":"8561456b8249de9135849307f62d9864d2e9fcdc4434d07cbc52152da7804757"} Jan 31 06:49:21 crc kubenswrapper[4687]: I0131 06:49:21.165944 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:21 crc kubenswrapper[4687]: I0131 06:49:21.165969 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" event={"ID":"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c","Type":"ContainerStarted","Data":"eef3c6bd9b15da3cc4689e73a8fe400a68364a98c660be048d5d0660ee07cf96"} Jan 31 06:49:21 crc kubenswrapper[4687]: I0131 06:49:21.166741 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" event={"ID":"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3","Type":"ContainerStarted","Data":"5ec5d81c0a2be54e4d1f3fc1391a86fa111036a877d5ea8fc491b698804b3acc"} Jan 31 06:49:21 crc kubenswrapper[4687]: I0131 06:49:21.166781 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" event={"ID":"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3","Type":"ContainerStarted","Data":"e1a1a37380c25d62a7c599791ba3b71734508128347c9fe131aa03ae9c6ce16c"} Jan 31 06:49:21 crc kubenswrapper[4687]: I0131 06:49:21.167484 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:21 crc kubenswrapper[4687]: I0131 06:49:21.171168 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:21 crc kubenswrapper[4687]: I0131 06:49:21.172923 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:21 crc kubenswrapper[4687]: I0131 06:49:21.186305 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" podStartSLOduration=4.18628725 podStartE2EDuration="4.18628725s" podCreationTimestamp="2026-01-31 06:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:49:21.18482868 +0000 UTC m=+387.462088245" watchObservedRunningTime="2026-01-31 06:49:21.18628725 +0000 UTC m=+387.463546825" Jan 31 06:49:21 crc kubenswrapper[4687]: I0131 06:49:21.279444 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" podStartSLOduration=4.279405992 podStartE2EDuration="4.279405992s" podCreationTimestamp="2026-01-31 06:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:49:21.275148306 +0000 UTC m=+387.552407881" watchObservedRunningTime="2026-01-31 06:49:21.279405992 +0000 UTC m=+387.556665567" Jan 31 06:49:22 crc kubenswrapper[4687]: I0131 06:49:22.170743 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" podUID="81a5dfe5-faee-4b77-abe7-fe018e8ae3c3" containerName="route-controller-manager" containerID="cri-o://5ec5d81c0a2be54e4d1f3fc1391a86fa111036a877d5ea8fc491b698804b3acc" gracePeriod=30 Jan 31 06:49:22 crc kubenswrapper[4687]: I0131 06:49:22.170787 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" podUID="3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c" containerName="controller-manager" 
containerID="cri-o://8561456b8249de9135849307f62d9864d2e9fcdc4434d07cbc52152da7804757" gracePeriod=30 Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.076453 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.111643 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k"] Jan 31 06:49:23 crc kubenswrapper[4687]: E0131 06:49:23.111871 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81a5dfe5-faee-4b77-abe7-fe018e8ae3c3" containerName="route-controller-manager" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.111886 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="81a5dfe5-faee-4b77-abe7-fe018e8ae3c3" containerName="route-controller-manager" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.111986 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="81a5dfe5-faee-4b77-abe7-fe018e8ae3c3" containerName="route-controller-manager" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.112838 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.125045 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k"] Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.145067 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.187143 4687 generic.go:334] "Generic (PLEG): container finished" podID="3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c" containerID="8561456b8249de9135849307f62d9864d2e9fcdc4434d07cbc52152da7804757" exitCode=0 Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.187215 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" event={"ID":"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c","Type":"ContainerDied","Data":"8561456b8249de9135849307f62d9864d2e9fcdc4434d07cbc52152da7804757"} Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.187242 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" event={"ID":"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c","Type":"ContainerDied","Data":"eef3c6bd9b15da3cc4689e73a8fe400a68364a98c660be048d5d0660ee07cf96"} Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.187261 4687 scope.go:117] "RemoveContainer" containerID="8561456b8249de9135849307f62d9864d2e9fcdc4434d07cbc52152da7804757" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.188593 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.189133 4687 generic.go:334] "Generic (PLEG): container finished" podID="81a5dfe5-faee-4b77-abe7-fe018e8ae3c3" containerID="5ec5d81c0a2be54e4d1f3fc1391a86fa111036a877d5ea8fc491b698804b3acc" exitCode=0 Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.189164 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" event={"ID":"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3","Type":"ContainerDied","Data":"5ec5d81c0a2be54e4d1f3fc1391a86fa111036a877d5ea8fc491b698804b3acc"} Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.189184 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" event={"ID":"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3","Type":"ContainerDied","Data":"e1a1a37380c25d62a7c599791ba3b71734508128347c9fe131aa03ae9c6ce16c"} Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.189208 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.190193 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-serving-cert\") pod \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\" (UID: \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\") " Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.190332 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-config\") pod \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\" (UID: \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\") " Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.190426 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-client-ca\") pod \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\" (UID: \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\") " Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.190454 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcmx4\" (UniqueName: \"kubernetes.io/projected/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-kube-api-access-gcmx4\") pod \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\" (UID: \"81a5dfe5-faee-4b77-abe7-fe018e8ae3c3\") " Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.191160 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-client-ca" (OuterVolumeSpecName: "client-ca") pod "81a5dfe5-faee-4b77-abe7-fe018e8ae3c3" (UID: "81a5dfe5-faee-4b77-abe7-fe018e8ae3c3"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.191325 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-config" (OuterVolumeSpecName: "config") pod "81a5dfe5-faee-4b77-abe7-fe018e8ae3c3" (UID: "81a5dfe5-faee-4b77-abe7-fe018e8ae3c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.191725 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.191748 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.195689 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "81a5dfe5-faee-4b77-abe7-fe018e8ae3c3" (UID: "81a5dfe5-faee-4b77-abe7-fe018e8ae3c3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.195898 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-kube-api-access-gcmx4" (OuterVolumeSpecName: "kube-api-access-gcmx4") pod "81a5dfe5-faee-4b77-abe7-fe018e8ae3c3" (UID: "81a5dfe5-faee-4b77-abe7-fe018e8ae3c3"). InnerVolumeSpecName "kube-api-access-gcmx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.211301 4687 scope.go:117] "RemoveContainer" containerID="8561456b8249de9135849307f62d9864d2e9fcdc4434d07cbc52152da7804757" Jan 31 06:49:23 crc kubenswrapper[4687]: E0131 06:49:23.211775 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8561456b8249de9135849307f62d9864d2e9fcdc4434d07cbc52152da7804757\": container with ID starting with 8561456b8249de9135849307f62d9864d2e9fcdc4434d07cbc52152da7804757 not found: ID does not exist" containerID="8561456b8249de9135849307f62d9864d2e9fcdc4434d07cbc52152da7804757" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.211811 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8561456b8249de9135849307f62d9864d2e9fcdc4434d07cbc52152da7804757"} err="failed to get container status \"8561456b8249de9135849307f62d9864d2e9fcdc4434d07cbc52152da7804757\": rpc error: code = NotFound desc = could not find container \"8561456b8249de9135849307f62d9864d2e9fcdc4434d07cbc52152da7804757\": container with ID starting with 8561456b8249de9135849307f62d9864d2e9fcdc4434d07cbc52152da7804757 not found: ID does not exist" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.211834 4687 scope.go:117] "RemoveContainer" containerID="5ec5d81c0a2be54e4d1f3fc1391a86fa111036a877d5ea8fc491b698804b3acc" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.230882 4687 scope.go:117] "RemoveContainer" containerID="5ec5d81c0a2be54e4d1f3fc1391a86fa111036a877d5ea8fc491b698804b3acc" Jan 31 06:49:23 crc kubenswrapper[4687]: E0131 06:49:23.231366 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ec5d81c0a2be54e4d1f3fc1391a86fa111036a877d5ea8fc491b698804b3acc\": container with ID starting with 
5ec5d81c0a2be54e4d1f3fc1391a86fa111036a877d5ea8fc491b698804b3acc not found: ID does not exist" containerID="5ec5d81c0a2be54e4d1f3fc1391a86fa111036a877d5ea8fc491b698804b3acc" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.231400 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ec5d81c0a2be54e4d1f3fc1391a86fa111036a877d5ea8fc491b698804b3acc"} err="failed to get container status \"5ec5d81c0a2be54e4d1f3fc1391a86fa111036a877d5ea8fc491b698804b3acc\": rpc error: code = NotFound desc = could not find container \"5ec5d81c0a2be54e4d1f3fc1391a86fa111036a877d5ea8fc491b698804b3acc\": container with ID starting with 5ec5d81c0a2be54e4d1f3fc1391a86fa111036a877d5ea8fc491b698804b3acc not found: ID does not exist" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.292812 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-serving-cert\") pod \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.292849 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bcpl\" (UniqueName: \"kubernetes.io/projected/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-kube-api-access-8bcpl\") pod \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.292871 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-client-ca\") pod \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.292900 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-config\") pod \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.292976 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-proxy-ca-bundles\") pod \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\" (UID: \"3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c\") " Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.293176 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8082316f-37ba-497f-91a2-955cc9858f46-client-ca\") pod \"route-controller-manager-5dcdbd9666-kdw9k\" (UID: \"8082316f-37ba-497f-91a2-955cc9858f46\") " pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.293196 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8082316f-37ba-497f-91a2-955cc9858f46-serving-cert\") pod \"route-controller-manager-5dcdbd9666-kdw9k\" (UID: \"8082316f-37ba-497f-91a2-955cc9858f46\") " pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.293221 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8082316f-37ba-497f-91a2-955cc9858f46-config\") pod \"route-controller-manager-5dcdbd9666-kdw9k\" (UID: \"8082316f-37ba-497f-91a2-955cc9858f46\") " pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.293239 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrcb7\" (UniqueName: \"kubernetes.io/projected/8082316f-37ba-497f-91a2-955cc9858f46-kube-api-access-vrcb7\") pod \"route-controller-manager-5dcdbd9666-kdw9k\" (UID: \"8082316f-37ba-497f-91a2-955cc9858f46\") " pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.293295 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.293305 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcmx4\" (UniqueName: \"kubernetes.io/projected/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3-kube-api-access-gcmx4\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.293777 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-client-ca" (OuterVolumeSpecName: "client-ca") pod "3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c" (UID: "3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.293958 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c" (UID: "3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.293986 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-config" (OuterVolumeSpecName: "config") pod "3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c" (UID: "3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.295684 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-kube-api-access-8bcpl" (OuterVolumeSpecName: "kube-api-access-8bcpl") pod "3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c" (UID: "3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c"). InnerVolumeSpecName "kube-api-access-8bcpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.296004 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c" (UID: "3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.394436 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8082316f-37ba-497f-91a2-955cc9858f46-client-ca\") pod \"route-controller-manager-5dcdbd9666-kdw9k\" (UID: \"8082316f-37ba-497f-91a2-955cc9858f46\") " pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.394518 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8082316f-37ba-497f-91a2-955cc9858f46-serving-cert\") pod \"route-controller-manager-5dcdbd9666-kdw9k\" (UID: \"8082316f-37ba-497f-91a2-955cc9858f46\") " pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.395739 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8082316f-37ba-497f-91a2-955cc9858f46-client-ca\") pod \"route-controller-manager-5dcdbd9666-kdw9k\" (UID: \"8082316f-37ba-497f-91a2-955cc9858f46\") " pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.405406 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8082316f-37ba-497f-91a2-955cc9858f46-config\") pod \"route-controller-manager-5dcdbd9666-kdw9k\" (UID: \"8082316f-37ba-497f-91a2-955cc9858f46\") " pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.405388 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8082316f-37ba-497f-91a2-955cc9858f46-config\") pod 
\"route-controller-manager-5dcdbd9666-kdw9k\" (UID: \"8082316f-37ba-497f-91a2-955cc9858f46\") " pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.405560 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrcb7\" (UniqueName: \"kubernetes.io/projected/8082316f-37ba-497f-91a2-955cc9858f46-kube-api-access-vrcb7\") pod \"route-controller-manager-5dcdbd9666-kdw9k\" (UID: \"8082316f-37ba-497f-91a2-955cc9858f46\") " pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.405671 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.405694 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.405716 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.405735 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.405753 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bcpl\" (UniqueName: \"kubernetes.io/projected/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c-kube-api-access-8bcpl\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:23 crc kubenswrapper[4687]: 
I0131 06:49:23.409110 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8082316f-37ba-497f-91a2-955cc9858f46-serving-cert\") pod \"route-controller-manager-5dcdbd9666-kdw9k\" (UID: \"8082316f-37ba-497f-91a2-955cc9858f46\") " pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.421947 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrcb7\" (UniqueName: \"kubernetes.io/projected/8082316f-37ba-497f-91a2-955cc9858f46-kube-api-access-vrcb7\") pod \"route-controller-manager-5dcdbd9666-kdw9k\" (UID: \"8082316f-37ba-497f-91a2-955cc9858f46\") " pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.442469 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.524543 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w"] Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.533707 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b64cbf5d9-zgb9w"] Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.549388 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh"] Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.550434 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75c8d44cbc-ld2qh"] Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.611327 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c" path="/var/lib/kubelet/pods/3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c/volumes" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.611878 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81a5dfe5-faee-4b77-abe7-fe018e8ae3c3" path="/var/lib/kubelet/pods/81a5dfe5-faee-4b77-abe7-fe018e8ae3c3/volumes" Jan 31 06:49:23 crc kubenswrapper[4687]: I0131 06:49:23.643059 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k"] Jan 31 06:49:23 crc kubenswrapper[4687]: W0131 06:49:23.649851 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8082316f_37ba_497f_91a2_955cc9858f46.slice/crio-58fabe21bc0479002eec2f0674b25417ce49d922aba6d91503fb968d99477ce8 WatchSource:0}: Error finding container 58fabe21bc0479002eec2f0674b25417ce49d922aba6d91503fb968d99477ce8: Status 404 returned error can't find the container with id 58fabe21bc0479002eec2f0674b25417ce49d922aba6d91503fb968d99477ce8 Jan 31 06:49:24 crc kubenswrapper[4687]: I0131 06:49:24.207232 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" event={"ID":"8082316f-37ba-497f-91a2-955cc9858f46","Type":"ContainerStarted","Data":"ea519bacd66d097c36d1b31cf19435d3f1159e283d2dbe37e64bd1916527c170"} Jan 31 06:49:24 crc kubenswrapper[4687]: I0131 06:49:24.207583 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" event={"ID":"8082316f-37ba-497f-91a2-955cc9858f46","Type":"ContainerStarted","Data":"58fabe21bc0479002eec2f0674b25417ce49d922aba6d91503fb968d99477ce8"} Jan 31 06:49:24 crc kubenswrapper[4687]: I0131 06:49:24.223551 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" podStartSLOduration=4.2235339849999995 podStartE2EDuration="4.223533985s" podCreationTimestamp="2026-01-31 06:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:49:24.222566569 +0000 UTC m=+390.499826134" watchObservedRunningTime="2026-01-31 06:49:24.223533985 +0000 UTC m=+390.500793550" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.213898 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.218987 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.625185 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-fb864b4d-tmlgf"] Jan 31 06:49:25 crc kubenswrapper[4687]: E0131 06:49:25.625862 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c" containerName="controller-manager" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.625886 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c" containerName="controller-manager" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.626076 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6ec4c1-a1a2-45d2-9c24-bfdff8e7726c" containerName="controller-manager" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.628177 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.631105 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.631334 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.631492 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.631624 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.631833 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.632047 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.635680 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9657a63-fe0e-4f75-94bc-6bdf091db267-serving-cert\") pod \"controller-manager-fb864b4d-tmlgf\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.635755 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-config\") pod \"controller-manager-fb864b4d-tmlgf\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " 
pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.635790 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-proxy-ca-bundles\") pod \"controller-manager-fb864b4d-tmlgf\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.635853 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgd75\" (UniqueName: \"kubernetes.io/projected/c9657a63-fe0e-4f75-94bc-6bdf091db267-kube-api-access-bgd75\") pod \"controller-manager-fb864b4d-tmlgf\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.635902 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-client-ca\") pod \"controller-manager-fb864b4d-tmlgf\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.645020 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.646449 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fb864b4d-tmlgf"] Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.737236 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgd75\" (UniqueName: 
\"kubernetes.io/projected/c9657a63-fe0e-4f75-94bc-6bdf091db267-kube-api-access-bgd75\") pod \"controller-manager-fb864b4d-tmlgf\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.737314 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-client-ca\") pod \"controller-manager-fb864b4d-tmlgf\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.737379 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9657a63-fe0e-4f75-94bc-6bdf091db267-serving-cert\") pod \"controller-manager-fb864b4d-tmlgf\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.737436 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-config\") pod \"controller-manager-fb864b4d-tmlgf\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.737463 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-proxy-ca-bundles\") pod \"controller-manager-fb864b4d-tmlgf\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.738923 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-client-ca\") pod \"controller-manager-fb864b4d-tmlgf\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.739870 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-proxy-ca-bundles\") pod \"controller-manager-fb864b4d-tmlgf\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.740174 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-config\") pod \"controller-manager-fb864b4d-tmlgf\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.748715 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9657a63-fe0e-4f75-94bc-6bdf091db267-serving-cert\") pod \"controller-manager-fb864b4d-tmlgf\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 06:49:25.757724 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgd75\" (UniqueName: \"kubernetes.io/projected/c9657a63-fe0e-4f75-94bc-6bdf091db267-kube-api-access-bgd75\") pod \"controller-manager-fb864b4d-tmlgf\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:25 crc kubenswrapper[4687]: I0131 
06:49:25.956175 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:26 crc kubenswrapper[4687]: I0131 06:49:26.147284 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fb864b4d-tmlgf"] Jan 31 06:49:26 crc kubenswrapper[4687]: I0131 06:49:26.220725 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" event={"ID":"c9657a63-fe0e-4f75-94bc-6bdf091db267","Type":"ContainerStarted","Data":"194d10fb9ee3df6b2dd0a345ebbddc5ac0b9215afe9fd91e71ddf808ad1feb3a"} Jan 31 06:49:27 crc kubenswrapper[4687]: I0131 06:49:27.228039 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" event={"ID":"c9657a63-fe0e-4f75-94bc-6bdf091db267","Type":"ContainerStarted","Data":"f979234129d97f8819cdca1ea3f010bdc7dcf147efb73da2c2d13aab91823ff9"} Jan 31 06:49:27 crc kubenswrapper[4687]: I0131 06:49:27.228443 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:27 crc kubenswrapper[4687]: I0131 06:49:27.240400 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:27 crc kubenswrapper[4687]: I0131 06:49:27.252672 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" podStartSLOduration=7.252652539 podStartE2EDuration="7.252652539s" podCreationTimestamp="2026-01-31 06:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:49:27.251394124 +0000 UTC m=+393.528653719" watchObservedRunningTime="2026-01-31 06:49:27.252652539 
+0000 UTC m=+393.529912124" Jan 31 06:49:28 crc kubenswrapper[4687]: I0131 06:49:28.684090 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:49:28 crc kubenswrapper[4687]: I0131 06:49:28.684155 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:49:37 crc kubenswrapper[4687]: I0131 06:49:37.665572 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-fb864b4d-tmlgf"] Jan 31 06:49:37 crc kubenswrapper[4687]: I0131 06:49:37.666339 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" podUID="c9657a63-fe0e-4f75-94bc-6bdf091db267" containerName="controller-manager" containerID="cri-o://f979234129d97f8819cdca1ea3f010bdc7dcf147efb73da2c2d13aab91823ff9" gracePeriod=30 Jan 31 06:49:37 crc kubenswrapper[4687]: I0131 06:49:37.677213 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k"] Jan 31 06:49:37 crc kubenswrapper[4687]: I0131 06:49:37.677478 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" podUID="8082316f-37ba-497f-91a2-955cc9858f46" containerName="route-controller-manager" containerID="cri-o://ea519bacd66d097c36d1b31cf19435d3f1159e283d2dbe37e64bd1916527c170" gracePeriod=30 Jan 31 06:49:38 crc 
kubenswrapper[4687]: I0131 06:49:38.264731 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.288804 4687 generic.go:334] "Generic (PLEG): container finished" podID="c9657a63-fe0e-4f75-94bc-6bdf091db267" containerID="f979234129d97f8819cdca1ea3f010bdc7dcf147efb73da2c2d13aab91823ff9" exitCode=0 Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.289052 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" event={"ID":"c9657a63-fe0e-4f75-94bc-6bdf091db267","Type":"ContainerDied","Data":"f979234129d97f8819cdca1ea3f010bdc7dcf147efb73da2c2d13aab91823ff9"} Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.290502 4687 generic.go:334] "Generic (PLEG): container finished" podID="8082316f-37ba-497f-91a2-955cc9858f46" containerID="ea519bacd66d097c36d1b31cf19435d3f1159e283d2dbe37e64bd1916527c170" exitCode=0 Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.290536 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" event={"ID":"8082316f-37ba-497f-91a2-955cc9858f46","Type":"ContainerDied","Data":"ea519bacd66d097c36d1b31cf19435d3f1159e283d2dbe37e64bd1916527c170"} Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.290567 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.290587 4687 scope.go:117] "RemoveContainer" containerID="ea519bacd66d097c36d1b31cf19435d3f1159e283d2dbe37e64bd1916527c170" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.290572 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k" event={"ID":"8082316f-37ba-497f-91a2-955cc9858f46","Type":"ContainerDied","Data":"58fabe21bc0479002eec2f0674b25417ce49d922aba6d91503fb968d99477ce8"} Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.311713 4687 scope.go:117] "RemoveContainer" containerID="ea519bacd66d097c36d1b31cf19435d3f1159e283d2dbe37e64bd1916527c170" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.311747 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrcb7\" (UniqueName: \"kubernetes.io/projected/8082316f-37ba-497f-91a2-955cc9858f46-kube-api-access-vrcb7\") pod \"8082316f-37ba-497f-91a2-955cc9858f46\" (UID: \"8082316f-37ba-497f-91a2-955cc9858f46\") " Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.311788 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8082316f-37ba-497f-91a2-955cc9858f46-serving-cert\") pod \"8082316f-37ba-497f-91a2-955cc9858f46\" (UID: \"8082316f-37ba-497f-91a2-955cc9858f46\") " Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.311809 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8082316f-37ba-497f-91a2-955cc9858f46-client-ca\") pod \"8082316f-37ba-497f-91a2-955cc9858f46\" (UID: \"8082316f-37ba-497f-91a2-955cc9858f46\") " Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.311892 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8082316f-37ba-497f-91a2-955cc9858f46-config\") pod \"8082316f-37ba-497f-91a2-955cc9858f46\" (UID: \"8082316f-37ba-497f-91a2-955cc9858f46\") " Jan 31 06:49:38 crc kubenswrapper[4687]: E0131 06:49:38.312208 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea519bacd66d097c36d1b31cf19435d3f1159e283d2dbe37e64bd1916527c170\": container with ID starting with ea519bacd66d097c36d1b31cf19435d3f1159e283d2dbe37e64bd1916527c170 not found: ID does not exist" containerID="ea519bacd66d097c36d1b31cf19435d3f1159e283d2dbe37e64bd1916527c170" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.312241 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea519bacd66d097c36d1b31cf19435d3f1159e283d2dbe37e64bd1916527c170"} err="failed to get container status \"ea519bacd66d097c36d1b31cf19435d3f1159e283d2dbe37e64bd1916527c170\": rpc error: code = NotFound desc = could not find container \"ea519bacd66d097c36d1b31cf19435d3f1159e283d2dbe37e64bd1916527c170\": container with ID starting with ea519bacd66d097c36d1b31cf19435d3f1159e283d2dbe37e64bd1916527c170 not found: ID does not exist" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.312835 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8082316f-37ba-497f-91a2-955cc9858f46-client-ca" (OuterVolumeSpecName: "client-ca") pod "8082316f-37ba-497f-91a2-955cc9858f46" (UID: "8082316f-37ba-497f-91a2-955cc9858f46"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.312936 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8082316f-37ba-497f-91a2-955cc9858f46-config" (OuterVolumeSpecName: "config") pod "8082316f-37ba-497f-91a2-955cc9858f46" (UID: "8082316f-37ba-497f-91a2-955cc9858f46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.317182 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8082316f-37ba-497f-91a2-955cc9858f46-kube-api-access-vrcb7" (OuterVolumeSpecName: "kube-api-access-vrcb7") pod "8082316f-37ba-497f-91a2-955cc9858f46" (UID: "8082316f-37ba-497f-91a2-955cc9858f46"). InnerVolumeSpecName "kube-api-access-vrcb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.332630 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8082316f-37ba-497f-91a2-955cc9858f46-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8082316f-37ba-497f-91a2-955cc9858f46" (UID: "8082316f-37ba-497f-91a2-955cc9858f46"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.354832 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.413129 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgd75\" (UniqueName: \"kubernetes.io/projected/c9657a63-fe0e-4f75-94bc-6bdf091db267-kube-api-access-bgd75\") pod \"c9657a63-fe0e-4f75-94bc-6bdf091db267\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.413171 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-client-ca\") pod \"c9657a63-fe0e-4f75-94bc-6bdf091db267\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.413193 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-config\") pod \"c9657a63-fe0e-4f75-94bc-6bdf091db267\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.413236 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-proxy-ca-bundles\") pod \"c9657a63-fe0e-4f75-94bc-6bdf091db267\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.413300 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9657a63-fe0e-4f75-94bc-6bdf091db267-serving-cert\") pod \"c9657a63-fe0e-4f75-94bc-6bdf091db267\" (UID: \"c9657a63-fe0e-4f75-94bc-6bdf091db267\") " Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.414137 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c9657a63-fe0e-4f75-94bc-6bdf091db267" (UID: "c9657a63-fe0e-4f75-94bc-6bdf091db267"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.414259 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-client-ca" (OuterVolumeSpecName: "client-ca") pod "c9657a63-fe0e-4f75-94bc-6bdf091db267" (UID: "c9657a63-fe0e-4f75-94bc-6bdf091db267"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.414262 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-config" (OuterVolumeSpecName: "config") pod "c9657a63-fe0e-4f75-94bc-6bdf091db267" (UID: "c9657a63-fe0e-4f75-94bc-6bdf091db267"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.414960 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8082316f-37ba-497f-91a2-955cc9858f46-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.414994 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8082316f-37ba-497f-91a2-955cc9858f46-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.415012 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.415034 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8082316f-37ba-497f-91a2-955cc9858f46-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.415053 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.415070 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9657a63-fe0e-4f75-94bc-6bdf091db267-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.415088 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrcb7\" (UniqueName: \"kubernetes.io/projected/8082316f-37ba-497f-91a2-955cc9858f46-kube-api-access-vrcb7\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.417598 4687 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9657a63-fe0e-4f75-94bc-6bdf091db267-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c9657a63-fe0e-4f75-94bc-6bdf091db267" (UID: "c9657a63-fe0e-4f75-94bc-6bdf091db267"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.417605 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9657a63-fe0e-4f75-94bc-6bdf091db267-kube-api-access-bgd75" (OuterVolumeSpecName: "kube-api-access-bgd75") pod "c9657a63-fe0e-4f75-94bc-6bdf091db267" (UID: "c9657a63-fe0e-4f75-94bc-6bdf091db267"). InnerVolumeSpecName "kube-api-access-bgd75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.516462 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9657a63-fe0e-4f75-94bc-6bdf091db267-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.516512 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgd75\" (UniqueName: \"kubernetes.io/projected/c9657a63-fe0e-4f75-94bc-6bdf091db267-kube-api-access-bgd75\") on node \"crc\" DevicePath \"\"" Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.622627 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k"] Jan 31 06:49:38 crc kubenswrapper[4687]: I0131 06:49:38.627780 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcdbd9666-kdw9k"] Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.299347 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" 
event={"ID":"c9657a63-fe0e-4f75-94bc-6bdf091db267","Type":"ContainerDied","Data":"194d10fb9ee3df6b2dd0a345ebbddc5ac0b9215afe9fd91e71ddf808ad1feb3a"} Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.300709 4687 scope.go:117] "RemoveContainer" containerID="f979234129d97f8819cdca1ea3f010bdc7dcf147efb73da2c2d13aab91823ff9" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.299379 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fb864b4d-tmlgf" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.332933 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-fb864b4d-tmlgf"] Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.338041 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-fb864b4d-tmlgf"] Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.609867 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8082316f-37ba-497f-91a2-955cc9858f46" path="/var/lib/kubelet/pods/8082316f-37ba-497f-91a2-955cc9858f46/volumes" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.610369 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9657a63-fe0e-4f75-94bc-6bdf091db267" path="/var/lib/kubelet/pods/c9657a63-fe0e-4f75-94bc-6bdf091db267/volumes" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.633310 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp"] Jan 31 06:49:39 crc kubenswrapper[4687]: E0131 06:49:39.633586 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9657a63-fe0e-4f75-94bc-6bdf091db267" containerName="controller-manager" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.633601 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9657a63-fe0e-4f75-94bc-6bdf091db267" 
containerName="controller-manager" Jan 31 06:49:39 crc kubenswrapper[4687]: E0131 06:49:39.633615 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8082316f-37ba-497f-91a2-955cc9858f46" containerName="route-controller-manager" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.633621 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8082316f-37ba-497f-91a2-955cc9858f46" containerName="route-controller-manager" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.633710 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9657a63-fe0e-4f75-94bc-6bdf091db267" containerName="controller-manager" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.633719 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8082316f-37ba-497f-91a2-955cc9858f46" containerName="route-controller-manager" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.634164 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.636226 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl"] Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.636579 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.636883 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.637700 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.637892 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.637999 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.640256 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.640274 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.640432 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.640510 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.640699 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.641168 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.641707 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.641756 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.650940 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.652173 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp"] Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.662694 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl"] Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.728187 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5715d7e4-5858-4146-a930-5c856cb301d6-client-ca\") pod \"route-controller-manager-d5f57dc99-8v6cl\" (UID: \"5715d7e4-5858-4146-a930-5c856cb301d6\") " pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.728243 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2p4j\" (UniqueName: \"kubernetes.io/projected/56425ac6-083d-4048-8eac-9e8a0beaac76-kube-api-access-g2p4j\") pod \"controller-manager-7f55cfb95b-7gvkp\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.728376 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5715d7e4-5858-4146-a930-5c856cb301d6-serving-cert\") pod \"route-controller-manager-d5f57dc99-8v6cl\" (UID: \"5715d7e4-5858-4146-a930-5c856cb301d6\") " pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.728463 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-client-ca\") pod \"controller-manager-7f55cfb95b-7gvkp\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.728572 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56425ac6-083d-4048-8eac-9e8a0beaac76-serving-cert\") pod \"controller-manager-7f55cfb95b-7gvkp\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.728606 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-config\") pod \"controller-manager-7f55cfb95b-7gvkp\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.728639 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-proxy-ca-bundles\") pod \"controller-manager-7f55cfb95b-7gvkp\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 
06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.728660 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5715d7e4-5858-4146-a930-5c856cb301d6-config\") pod \"route-controller-manager-d5f57dc99-8v6cl\" (UID: \"5715d7e4-5858-4146-a930-5c856cb301d6\") " pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.728680 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jgx7\" (UniqueName: \"kubernetes.io/projected/5715d7e4-5858-4146-a930-5c856cb301d6-kube-api-access-9jgx7\") pod \"route-controller-manager-d5f57dc99-8v6cl\" (UID: \"5715d7e4-5858-4146-a930-5c856cb301d6\") " pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.830778 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-client-ca\") pod \"controller-manager-7f55cfb95b-7gvkp\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.830908 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56425ac6-083d-4048-8eac-9e8a0beaac76-serving-cert\") pod \"controller-manager-7f55cfb95b-7gvkp\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.830948 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-config\") pod 
\"controller-manager-7f55cfb95b-7gvkp\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.830971 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-proxy-ca-bundles\") pod \"controller-manager-7f55cfb95b-7gvkp\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.830995 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5715d7e4-5858-4146-a930-5c856cb301d6-config\") pod \"route-controller-manager-d5f57dc99-8v6cl\" (UID: \"5715d7e4-5858-4146-a930-5c856cb301d6\") " pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.831016 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jgx7\" (UniqueName: \"kubernetes.io/projected/5715d7e4-5858-4146-a930-5c856cb301d6-kube-api-access-9jgx7\") pod \"route-controller-manager-d5f57dc99-8v6cl\" (UID: \"5715d7e4-5858-4146-a930-5c856cb301d6\") " pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.831058 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5715d7e4-5858-4146-a930-5c856cb301d6-client-ca\") pod \"route-controller-manager-d5f57dc99-8v6cl\" (UID: \"5715d7e4-5858-4146-a930-5c856cb301d6\") " pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.831083 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g2p4j\" (UniqueName: \"kubernetes.io/projected/56425ac6-083d-4048-8eac-9e8a0beaac76-kube-api-access-g2p4j\") pod \"controller-manager-7f55cfb95b-7gvkp\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.831113 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5715d7e4-5858-4146-a930-5c856cb301d6-serving-cert\") pod \"route-controller-manager-d5f57dc99-8v6cl\" (UID: \"5715d7e4-5858-4146-a930-5c856cb301d6\") " pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.832633 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5715d7e4-5858-4146-a930-5c856cb301d6-client-ca\") pod \"route-controller-manager-d5f57dc99-8v6cl\" (UID: \"5715d7e4-5858-4146-a930-5c856cb301d6\") " pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.832673 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5715d7e4-5858-4146-a930-5c856cb301d6-config\") pod \"route-controller-manager-d5f57dc99-8v6cl\" (UID: \"5715d7e4-5858-4146-a930-5c856cb301d6\") " pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.833863 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-proxy-ca-bundles\") pod \"controller-manager-7f55cfb95b-7gvkp\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " 
pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.835005 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-client-ca\") pod \"controller-manager-7f55cfb95b-7gvkp\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.836757 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-config\") pod \"controller-manager-7f55cfb95b-7gvkp\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.838308 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5715d7e4-5858-4146-a930-5c856cb301d6-serving-cert\") pod \"route-controller-manager-d5f57dc99-8v6cl\" (UID: \"5715d7e4-5858-4146-a930-5c856cb301d6\") " pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.838362 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56425ac6-083d-4048-8eac-9e8a0beaac76-serving-cert\") pod \"controller-manager-7f55cfb95b-7gvkp\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.853481 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2p4j\" (UniqueName: \"kubernetes.io/projected/56425ac6-083d-4048-8eac-9e8a0beaac76-kube-api-access-g2p4j\") pod 
\"controller-manager-7f55cfb95b-7gvkp\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.854152 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jgx7\" (UniqueName: \"kubernetes.io/projected/5715d7e4-5858-4146-a930-5c856cb301d6-kube-api-access-9jgx7\") pod \"route-controller-manager-d5f57dc99-8v6cl\" (UID: \"5715d7e4-5858-4146-a930-5c856cb301d6\") " pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.957149 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:39 crc kubenswrapper[4687]: I0131 06:49:39.967597 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:49:40 crc kubenswrapper[4687]: I0131 06:49:40.134646 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp"] Jan 31 06:49:40 crc kubenswrapper[4687]: I0131 06:49:40.179962 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl"] Jan 31 06:49:40 crc kubenswrapper[4687]: W0131 06:49:40.185382 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5715d7e4_5858_4146_a930_5c856cb301d6.slice/crio-7c89098ee3f76f496c14f24b0008e2277135bc48235cb9a60c28d33030bb23e4 WatchSource:0}: Error finding container 7c89098ee3f76f496c14f24b0008e2277135bc48235cb9a60c28d33030bb23e4: Status 404 returned error can't find the container with id 7c89098ee3f76f496c14f24b0008e2277135bc48235cb9a60c28d33030bb23e4 Jan 31 06:49:40 crc 
kubenswrapper[4687]: I0131 06:49:40.314726 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" event={"ID":"5715d7e4-5858-4146-a930-5c856cb301d6","Type":"ContainerStarted","Data":"7c89098ee3f76f496c14f24b0008e2277135bc48235cb9a60c28d33030bb23e4"} Jan 31 06:49:40 crc kubenswrapper[4687]: I0131 06:49:40.316988 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" event={"ID":"56425ac6-083d-4048-8eac-9e8a0beaac76","Type":"ContainerStarted","Data":"de8f540d6a07b1bcddb834efe9d2ce171d39a5acaa8c2036fe20a5c651df0b37"} Jan 31 06:49:41 crc kubenswrapper[4687]: I0131 06:49:41.321790 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" event={"ID":"56425ac6-083d-4048-8eac-9e8a0beaac76","Type":"ContainerStarted","Data":"242dbc552298e685f1ce48e996dc37008ef61a34f8625367df1f6703e8b0d40a"} Jan 31 06:49:41 crc kubenswrapper[4687]: I0131 06:49:41.323270 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:49:41 crc kubenswrapper[4687]: I0131 06:49:41.324450 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" event={"ID":"5715d7e4-5858-4146-a930-5c856cb301d6","Type":"ContainerStarted","Data":"e5626bca1e62263adcf7f1cb09fb62bdc4e0e3762cca040ea49e4ad4efd12e03"} Jan 31 06:49:41 crc kubenswrapper[4687]: I0131 06:49:41.325053 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:49:41 crc kubenswrapper[4687]: I0131 06:49:41.328952 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 
06:49:41 crc kubenswrapper[4687]: I0131 06:49:41.331346 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:49:41 crc kubenswrapper[4687]: I0131 06:49:41.343089 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" podStartSLOduration=4.34306471 podStartE2EDuration="4.34306471s" podCreationTimestamp="2026-01-31 06:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:49:41.337900739 +0000 UTC m=+407.615160314" watchObservedRunningTime="2026-01-31 06:49:41.34306471 +0000 UTC m=+407.620324285" Jan 31 06:49:41 crc kubenswrapper[4687]: I0131 06:49:41.380694 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" podStartSLOduration=4.380675977 podStartE2EDuration="4.380675977s" podCreationTimestamp="2026-01-31 06:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:49:41.377767397 +0000 UTC m=+407.655026962" watchObservedRunningTime="2026-01-31 06:49:41.380675977 +0000 UTC m=+407.657935552" Jan 31 06:49:58 crc kubenswrapper[4687]: I0131 06:49:58.684613 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:49:58 crc kubenswrapper[4687]: I0131 06:49:58.685144 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:49:58 crc kubenswrapper[4687]: I0131 06:49:58.685193 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:49:58 crc kubenswrapper[4687]: I0131 06:49:58.685803 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fae6440a00bffd2c9912563b3a0133e343e7e89f2c4e7a9ccaeea3baa2211238"} pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 06:49:58 crc kubenswrapper[4687]: I0131 06:49:58.685853 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" containerID="cri-o://fae6440a00bffd2c9912563b3a0133e343e7e89f2c4e7a9ccaeea3baa2211238" gracePeriod=600 Jan 31 06:49:59 crc kubenswrapper[4687]: I0131 06:49:59.426992 4687 generic.go:334] "Generic (PLEG): container finished" podID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerID="fae6440a00bffd2c9912563b3a0133e343e7e89f2c4e7a9ccaeea3baa2211238" exitCode=0 Jan 31 06:49:59 crc kubenswrapper[4687]: I0131 06:49:59.427075 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerDied","Data":"fae6440a00bffd2c9912563b3a0133e343e7e89f2c4e7a9ccaeea3baa2211238"} Jan 31 06:49:59 crc kubenswrapper[4687]: I0131 06:49:59.427533 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" 
event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerStarted","Data":"f5db07448c568d30be8d0035977d79c95df6569fda6354ccd5bf27d59ac84ac4"} Jan 31 06:49:59 crc kubenswrapper[4687]: I0131 06:49:59.427555 4687 scope.go:117] "RemoveContainer" containerID="abfba9b7f58665b0c0568327f4362bb4f777dfe5a21c9de0e5795deac7c5120a" Jan 31 06:50:14 crc kubenswrapper[4687]: I0131 06:50:14.985358 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6qn9w"] Jan 31 06:50:17 crc kubenswrapper[4687]: I0131 06:50:17.672986 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp"] Jan 31 06:50:17 crc kubenswrapper[4687]: I0131 06:50:17.673682 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" podUID="56425ac6-083d-4048-8eac-9e8a0beaac76" containerName="controller-manager" containerID="cri-o://242dbc552298e685f1ce48e996dc37008ef61a34f8625367df1f6703e8b0d40a" gracePeriod=30 Jan 31 06:50:17 crc kubenswrapper[4687]: I0131 06:50:17.769201 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl"] Jan 31 06:50:17 crc kubenswrapper[4687]: I0131 06:50:17.769402 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" podUID="5715d7e4-5858-4146-a930-5c856cb301d6" containerName="route-controller-manager" containerID="cri-o://e5626bca1e62263adcf7f1cb09fb62bdc4e0e3762cca040ea49e4ad4efd12e03" gracePeriod=30 Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.055970 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.137994 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.221492 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5715d7e4-5858-4146-a930-5c856cb301d6-serving-cert\") pod \"5715d7e4-5858-4146-a930-5c856cb301d6\" (UID: \"5715d7e4-5858-4146-a930-5c856cb301d6\") " Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.221560 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-proxy-ca-bundles\") pod \"56425ac6-083d-4048-8eac-9e8a0beaac76\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.221624 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-client-ca\") pod \"56425ac6-083d-4048-8eac-9e8a0beaac76\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.221656 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-config\") pod \"56425ac6-083d-4048-8eac-9e8a0beaac76\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.221698 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5715d7e4-5858-4146-a930-5c856cb301d6-client-ca\") pod 
\"5715d7e4-5858-4146-a930-5c856cb301d6\" (UID: \"5715d7e4-5858-4146-a930-5c856cb301d6\") " Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.221725 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56425ac6-083d-4048-8eac-9e8a0beaac76-serving-cert\") pod \"56425ac6-083d-4048-8eac-9e8a0beaac76\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.221766 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2p4j\" (UniqueName: \"kubernetes.io/projected/56425ac6-083d-4048-8eac-9e8a0beaac76-kube-api-access-g2p4j\") pod \"56425ac6-083d-4048-8eac-9e8a0beaac76\" (UID: \"56425ac6-083d-4048-8eac-9e8a0beaac76\") " Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.221815 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5715d7e4-5858-4146-a930-5c856cb301d6-config\") pod \"5715d7e4-5858-4146-a930-5c856cb301d6\" (UID: \"5715d7e4-5858-4146-a930-5c856cb301d6\") " Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.221846 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jgx7\" (UniqueName: \"kubernetes.io/projected/5715d7e4-5858-4146-a930-5c856cb301d6-kube-api-access-9jgx7\") pod \"5715d7e4-5858-4146-a930-5c856cb301d6\" (UID: \"5715d7e4-5858-4146-a930-5c856cb301d6\") " Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.222980 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5715d7e4-5858-4146-a930-5c856cb301d6-client-ca" (OuterVolumeSpecName: "client-ca") pod "5715d7e4-5858-4146-a930-5c856cb301d6" (UID: "5715d7e4-5858-4146-a930-5c856cb301d6"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.223053 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-client-ca" (OuterVolumeSpecName: "client-ca") pod "56425ac6-083d-4048-8eac-9e8a0beaac76" (UID: "56425ac6-083d-4048-8eac-9e8a0beaac76"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.223578 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "56425ac6-083d-4048-8eac-9e8a0beaac76" (UID: "56425ac6-083d-4048-8eac-9e8a0beaac76"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.223698 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-config" (OuterVolumeSpecName: "config") pod "56425ac6-083d-4048-8eac-9e8a0beaac76" (UID: "56425ac6-083d-4048-8eac-9e8a0beaac76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.223903 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5715d7e4-5858-4146-a930-5c856cb301d6-config" (OuterVolumeSpecName: "config") pod "5715d7e4-5858-4146-a930-5c856cb301d6" (UID: "5715d7e4-5858-4146-a930-5c856cb301d6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.226728 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5715d7e4-5858-4146-a930-5c856cb301d6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5715d7e4-5858-4146-a930-5c856cb301d6" (UID: "5715d7e4-5858-4146-a930-5c856cb301d6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.226919 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56425ac6-083d-4048-8eac-9e8a0beaac76-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "56425ac6-083d-4048-8eac-9e8a0beaac76" (UID: "56425ac6-083d-4048-8eac-9e8a0beaac76"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.226971 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5715d7e4-5858-4146-a930-5c856cb301d6-kube-api-access-9jgx7" (OuterVolumeSpecName: "kube-api-access-9jgx7") pod "5715d7e4-5858-4146-a930-5c856cb301d6" (UID: "5715d7e4-5858-4146-a930-5c856cb301d6"). InnerVolumeSpecName "kube-api-access-9jgx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.227023 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56425ac6-083d-4048-8eac-9e8a0beaac76-kube-api-access-g2p4j" (OuterVolumeSpecName: "kube-api-access-g2p4j") pod "56425ac6-083d-4048-8eac-9e8a0beaac76" (UID: "56425ac6-083d-4048-8eac-9e8a0beaac76"). InnerVolumeSpecName "kube-api-access-g2p4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.323701 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5715d7e4-5858-4146-a930-5c856cb301d6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.323756 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56425ac6-083d-4048-8eac-9e8a0beaac76-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.323770 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2p4j\" (UniqueName: \"kubernetes.io/projected/56425ac6-083d-4048-8eac-9e8a0beaac76-kube-api-access-g2p4j\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.323783 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5715d7e4-5858-4146-a930-5c856cb301d6-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.323794 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jgx7\" (UniqueName: \"kubernetes.io/projected/5715d7e4-5858-4146-a930-5c856cb301d6-kube-api-access-9jgx7\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.323833 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5715d7e4-5858-4146-a930-5c856cb301d6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.323844 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.323856 4687 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.323867 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56425ac6-083d-4048-8eac-9e8a0beaac76-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.534392 4687 generic.go:334] "Generic (PLEG): container finished" podID="56425ac6-083d-4048-8eac-9e8a0beaac76" containerID="242dbc552298e685f1ce48e996dc37008ef61a34f8625367df1f6703e8b0d40a" exitCode=0 Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.534441 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" event={"ID":"56425ac6-083d-4048-8eac-9e8a0beaac76","Type":"ContainerDied","Data":"242dbc552298e685f1ce48e996dc37008ef61a34f8625367df1f6703e8b0d40a"} Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.534486 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" event={"ID":"56425ac6-083d-4048-8eac-9e8a0beaac76","Type":"ContainerDied","Data":"de8f540d6a07b1bcddb834efe9d2ce171d39a5acaa8c2036fe20a5c651df0b37"} Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.534516 4687 scope.go:117] "RemoveContainer" containerID="242dbc552298e685f1ce48e996dc37008ef61a34f8625367df1f6703e8b0d40a" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.534480 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.535801 4687 generic.go:334] "Generic (PLEG): container finished" podID="5715d7e4-5858-4146-a930-5c856cb301d6" containerID="e5626bca1e62263adcf7f1cb09fb62bdc4e0e3762cca040ea49e4ad4efd12e03" exitCode=0 Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.535833 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" event={"ID":"5715d7e4-5858-4146-a930-5c856cb301d6","Type":"ContainerDied","Data":"e5626bca1e62263adcf7f1cb09fb62bdc4e0e3762cca040ea49e4ad4efd12e03"} Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.535855 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" event={"ID":"5715d7e4-5858-4146-a930-5c856cb301d6","Type":"ContainerDied","Data":"7c89098ee3f76f496c14f24b0008e2277135bc48235cb9a60c28d33030bb23e4"} Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.535920 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.559785 4687 scope.go:117] "RemoveContainer" containerID="242dbc552298e685f1ce48e996dc37008ef61a34f8625367df1f6703e8b0d40a" Jan 31 06:50:18 crc kubenswrapper[4687]: E0131 06:50:18.560238 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"242dbc552298e685f1ce48e996dc37008ef61a34f8625367df1f6703e8b0d40a\": container with ID starting with 242dbc552298e685f1ce48e996dc37008ef61a34f8625367df1f6703e8b0d40a not found: ID does not exist" containerID="242dbc552298e685f1ce48e996dc37008ef61a34f8625367df1f6703e8b0d40a" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.565319 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"242dbc552298e685f1ce48e996dc37008ef61a34f8625367df1f6703e8b0d40a"} err="failed to get container status \"242dbc552298e685f1ce48e996dc37008ef61a34f8625367df1f6703e8b0d40a\": rpc error: code = NotFound desc = could not find container \"242dbc552298e685f1ce48e996dc37008ef61a34f8625367df1f6703e8b0d40a\": container with ID starting with 242dbc552298e685f1ce48e996dc37008ef61a34f8625367df1f6703e8b0d40a not found: ID does not exist" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.565396 4687 scope.go:117] "RemoveContainer" containerID="e5626bca1e62263adcf7f1cb09fb62bdc4e0e3762cca040ea49e4ad4efd12e03" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.568298 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl"] Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.574361 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5f57dc99-8v6cl"] Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.584586 4687 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp"] Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.589044 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f55cfb95b-7gvkp"] Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.600528 4687 scope.go:117] "RemoveContainer" containerID="e5626bca1e62263adcf7f1cb09fb62bdc4e0e3762cca040ea49e4ad4efd12e03" Jan 31 06:50:18 crc kubenswrapper[4687]: E0131 06:50:18.601067 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5626bca1e62263adcf7f1cb09fb62bdc4e0e3762cca040ea49e4ad4efd12e03\": container with ID starting with e5626bca1e62263adcf7f1cb09fb62bdc4e0e3762cca040ea49e4ad4efd12e03 not found: ID does not exist" containerID="e5626bca1e62263adcf7f1cb09fb62bdc4e0e3762cca040ea49e4ad4efd12e03" Jan 31 06:50:18 crc kubenswrapper[4687]: I0131 06:50:18.601112 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5626bca1e62263adcf7f1cb09fb62bdc4e0e3762cca040ea49e4ad4efd12e03"} err="failed to get container status \"e5626bca1e62263adcf7f1cb09fb62bdc4e0e3762cca040ea49e4ad4efd12e03\": rpc error: code = NotFound desc = could not find container \"e5626bca1e62263adcf7f1cb09fb62bdc4e0e3762cca040ea49e4ad4efd12e03\": container with ID starting with e5626bca1e62263adcf7f1cb09fb62bdc4e0e3762cca040ea49e4ad4efd12e03 not found: ID does not exist" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.621060 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56425ac6-083d-4048-8eac-9e8a0beaac76" path="/var/lib/kubelet/pods/56425ac6-083d-4048-8eac-9e8a0beaac76/volumes" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.624941 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5715d7e4-5858-4146-a930-5c856cb301d6" 
path="/var/lib/kubelet/pods/5715d7e4-5858-4146-a930-5c856cb301d6/volumes" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.669101 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65f86f88c7-kkzm9"] Jan 31 06:50:19 crc kubenswrapper[4687]: E0131 06:50:19.669572 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5715d7e4-5858-4146-a930-5c856cb301d6" containerName="route-controller-manager" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.669604 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5715d7e4-5858-4146-a930-5c856cb301d6" containerName="route-controller-manager" Jan 31 06:50:19 crc kubenswrapper[4687]: E0131 06:50:19.669628 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56425ac6-083d-4048-8eac-9e8a0beaac76" containerName="controller-manager" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.669642 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="56425ac6-083d-4048-8eac-9e8a0beaac76" containerName="controller-manager" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.669842 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="56425ac6-083d-4048-8eac-9e8a0beaac76" containerName="controller-manager" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.669873 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5715d7e4-5858-4146-a930-5c856cb301d6" containerName="route-controller-manager" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.670532 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.672560 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz"] Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.673333 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.673441 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.673524 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.673582 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.673878 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.674357 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.679294 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65f86f88c7-kkzm9"] Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.679440 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.680547 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.683291 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz"] Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.684525 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.684662 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.684994 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.685055 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.685260 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.685570 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.845249 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf2e1f0-778b-499e-8160-58cc440a9b23-serving-cert\") pod \"controller-manager-65f86f88c7-kkzm9\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") 
" pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.845304 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-config\") pod \"controller-manager-65f86f88c7-kkzm9\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.845333 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6ffv\" (UniqueName: \"kubernetes.io/projected/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-kube-api-access-v6ffv\") pod \"route-controller-manager-54c7fc76ff-xf8jz\" (UID: \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\") " pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.845352 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-serving-cert\") pod \"route-controller-manager-54c7fc76ff-xf8jz\" (UID: \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\") " pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.845374 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-config\") pod \"route-controller-manager-54c7fc76ff-xf8jz\" (UID: \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\") " pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.845577 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhnzl\" (UniqueName: \"kubernetes.io/projected/cbf2e1f0-778b-499e-8160-58cc440a9b23-kube-api-access-bhnzl\") pod \"controller-manager-65f86f88c7-kkzm9\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.845662 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-proxy-ca-bundles\") pod \"controller-manager-65f86f88c7-kkzm9\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.845761 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-client-ca\") pod \"route-controller-manager-54c7fc76ff-xf8jz\" (UID: \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\") " pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.845789 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-client-ca\") pod \"controller-manager-65f86f88c7-kkzm9\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.946485 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhnzl\" (UniqueName: \"kubernetes.io/projected/cbf2e1f0-778b-499e-8160-58cc440a9b23-kube-api-access-bhnzl\") pod 
\"controller-manager-65f86f88c7-kkzm9\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.946578 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-proxy-ca-bundles\") pod \"controller-manager-65f86f88c7-kkzm9\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.946640 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-client-ca\") pod \"controller-manager-65f86f88c7-kkzm9\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.946673 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-client-ca\") pod \"route-controller-manager-54c7fc76ff-xf8jz\" (UID: \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\") " pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.946716 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf2e1f0-778b-499e-8160-58cc440a9b23-serving-cert\") pod \"controller-manager-65f86f88c7-kkzm9\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.946740 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-config\") pod \"controller-manager-65f86f88c7-kkzm9\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.946770 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-serving-cert\") pod \"route-controller-manager-54c7fc76ff-xf8jz\" (UID: \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\") " pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.946794 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6ffv\" (UniqueName: \"kubernetes.io/projected/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-kube-api-access-v6ffv\") pod \"route-controller-manager-54c7fc76ff-xf8jz\" (UID: \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\") " pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.946824 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-config\") pod \"route-controller-manager-54c7fc76ff-xf8jz\" (UID: \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\") " pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.948264 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-config\") pod \"route-controller-manager-54c7fc76ff-xf8jz\" (UID: \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\") " pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:19 crc 
kubenswrapper[4687]: I0131 06:50:19.949431 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-client-ca\") pod \"controller-manager-65f86f88c7-kkzm9\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.949456 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-client-ca\") pod \"route-controller-manager-54c7fc76ff-xf8jz\" (UID: \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\") " pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.949831 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-config\") pod \"controller-manager-65f86f88c7-kkzm9\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.950060 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-proxy-ca-bundles\") pod \"controller-manager-65f86f88c7-kkzm9\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.951205 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf2e1f0-778b-499e-8160-58cc440a9b23-serving-cert\") pod \"controller-manager-65f86f88c7-kkzm9\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " 
pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.951780 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-serving-cert\") pod \"route-controller-manager-54c7fc76ff-xf8jz\" (UID: \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\") " pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.962767 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6ffv\" (UniqueName: \"kubernetes.io/projected/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-kube-api-access-v6ffv\") pod \"route-controller-manager-54c7fc76ff-xf8jz\" (UID: \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\") " pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.962842 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhnzl\" (UniqueName: \"kubernetes.io/projected/cbf2e1f0-778b-499e-8160-58cc440a9b23-kube-api-access-bhnzl\") pod \"controller-manager-65f86f88c7-kkzm9\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:19 crc kubenswrapper[4687]: I0131 06:50:19.996759 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:20 crc kubenswrapper[4687]: I0131 06:50:20.011246 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:20 crc kubenswrapper[4687]: I0131 06:50:20.425737 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65f86f88c7-kkzm9"] Jan 31 06:50:20 crc kubenswrapper[4687]: I0131 06:50:20.471651 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz"] Jan 31 06:50:20 crc kubenswrapper[4687]: W0131 06:50:20.476426 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffc4bb88_e183_4cd4_a4a2_73fc4d3d4191.slice/crio-0564f4e3c2bf2a4920da3701ae8e9d42f628a4811bdb73c646efc0d78a080471 WatchSource:0}: Error finding container 0564f4e3c2bf2a4920da3701ae8e9d42f628a4811bdb73c646efc0d78a080471: Status 404 returned error can't find the container with id 0564f4e3c2bf2a4920da3701ae8e9d42f628a4811bdb73c646efc0d78a080471 Jan 31 06:50:20 crc kubenswrapper[4687]: I0131 06:50:20.558097 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" event={"ID":"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191","Type":"ContainerStarted","Data":"0564f4e3c2bf2a4920da3701ae8e9d42f628a4811bdb73c646efc0d78a080471"} Jan 31 06:50:20 crc kubenswrapper[4687]: I0131 06:50:20.559266 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" event={"ID":"cbf2e1f0-778b-499e-8160-58cc440a9b23","Type":"ContainerStarted","Data":"9d2fd6a05208d5a7e8fd9ad964f1b24bb2008f278160c3c6c99ccd7e617164ec"} Jan 31 06:50:21 crc kubenswrapper[4687]: I0131 06:50:21.566252 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" 
event={"ID":"cbf2e1f0-778b-499e-8160-58cc440a9b23","Type":"ContainerStarted","Data":"28fd7ed206aa03b7e8527d7a6dac55cb0cc847d3385d4afe06443df15550aecd"} Jan 31 06:50:21 crc kubenswrapper[4687]: I0131 06:50:21.566634 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:21 crc kubenswrapper[4687]: I0131 06:50:21.567875 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" event={"ID":"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191","Type":"ContainerStarted","Data":"ee802957f31b25e30dc320be407d7137e2a52b0756ddda8de887d9490b01e8ba"} Jan 31 06:50:21 crc kubenswrapper[4687]: I0131 06:50:21.568124 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:21 crc kubenswrapper[4687]: I0131 06:50:21.571811 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:21 crc kubenswrapper[4687]: I0131 06:50:21.572634 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:21 crc kubenswrapper[4687]: I0131 06:50:21.586662 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" podStartSLOduration=4.58664583 podStartE2EDuration="4.58664583s" podCreationTimestamp="2026-01-31 06:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:50:21.582152838 +0000 UTC m=+447.859412413" watchObservedRunningTime="2026-01-31 06:50:21.58664583 +0000 UTC m=+447.863905395" Jan 31 06:50:21 crc kubenswrapper[4687]: I0131 
06:50:21.603883 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" podStartSLOduration=4.6038654900000004 podStartE2EDuration="4.60386549s" podCreationTimestamp="2026-01-31 06:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:50:21.599606624 +0000 UTC m=+447.876866199" watchObservedRunningTime="2026-01-31 06:50:21.60386549 +0000 UTC m=+447.881125065" Jan 31 06:50:25 crc kubenswrapper[4687]: I0131 06:50:25.756992 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l2btx"] Jan 31 06:50:25 crc kubenswrapper[4687]: I0131 06:50:25.757781 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l2btx" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" containerName="registry-server" containerID="cri-o://25c6810dfc2b19b46d120d51a4fc898eed020e65d507c50d8bf13005d344aca3" gracePeriod=2 Jan 31 06:50:26 crc kubenswrapper[4687]: I0131 06:50:26.602498 4687 generic.go:334] "Generic (PLEG): container finished" podID="8ed021eb-a227-4014-a487-72aa0de25bac" containerID="25c6810dfc2b19b46d120d51a4fc898eed020e65d507c50d8bf13005d344aca3" exitCode=0 Jan 31 06:50:26 crc kubenswrapper[4687]: I0131 06:50:26.602552 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2btx" event={"ID":"8ed021eb-a227-4014-a487-72aa0de25bac","Type":"ContainerDied","Data":"25c6810dfc2b19b46d120d51a4fc898eed020e65d507c50d8bf13005d344aca3"} Jan 31 06:50:26 crc kubenswrapper[4687]: I0131 06:50:26.825991 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:50:26 crc kubenswrapper[4687]: I0131 06:50:26.938810 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj6s4\" (UniqueName: \"kubernetes.io/projected/8ed021eb-a227-4014-a487-72aa0de25bac-kube-api-access-tj6s4\") pod \"8ed021eb-a227-4014-a487-72aa0de25bac\" (UID: \"8ed021eb-a227-4014-a487-72aa0de25bac\") " Jan 31 06:50:26 crc kubenswrapper[4687]: I0131 06:50:26.938888 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ed021eb-a227-4014-a487-72aa0de25bac-utilities\") pod \"8ed021eb-a227-4014-a487-72aa0de25bac\" (UID: \"8ed021eb-a227-4014-a487-72aa0de25bac\") " Jan 31 06:50:26 crc kubenswrapper[4687]: I0131 06:50:26.938976 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ed021eb-a227-4014-a487-72aa0de25bac-catalog-content\") pod \"8ed021eb-a227-4014-a487-72aa0de25bac\" (UID: \"8ed021eb-a227-4014-a487-72aa0de25bac\") " Jan 31 06:50:26 crc kubenswrapper[4687]: I0131 06:50:26.940305 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ed021eb-a227-4014-a487-72aa0de25bac-utilities" (OuterVolumeSpecName: "utilities") pod "8ed021eb-a227-4014-a487-72aa0de25bac" (UID: "8ed021eb-a227-4014-a487-72aa0de25bac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:50:26 crc kubenswrapper[4687]: I0131 06:50:26.949260 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ed021eb-a227-4014-a487-72aa0de25bac-kube-api-access-tj6s4" (OuterVolumeSpecName: "kube-api-access-tj6s4") pod "8ed021eb-a227-4014-a487-72aa0de25bac" (UID: "8ed021eb-a227-4014-a487-72aa0de25bac"). InnerVolumeSpecName "kube-api-access-tj6s4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:50:26 crc kubenswrapper[4687]: I0131 06:50:26.988439 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ed021eb-a227-4014-a487-72aa0de25bac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ed021eb-a227-4014-a487-72aa0de25bac" (UID: "8ed021eb-a227-4014-a487-72aa0de25bac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:50:27 crc kubenswrapper[4687]: I0131 06:50:27.040083 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj6s4\" (UniqueName: \"kubernetes.io/projected/8ed021eb-a227-4014-a487-72aa0de25bac-kube-api-access-tj6s4\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:27 crc kubenswrapper[4687]: I0131 06:50:27.040142 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ed021eb-a227-4014-a487-72aa0de25bac-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:27 crc kubenswrapper[4687]: I0131 06:50:27.040155 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ed021eb-a227-4014-a487-72aa0de25bac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:27 crc kubenswrapper[4687]: I0131 06:50:27.154448 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j46rp"] Jan 31 06:50:27 crc kubenswrapper[4687]: I0131 06:50:27.154726 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j46rp" podUID="dceba003-329b-4858-a9d2-7499eef39366" containerName="registry-server" containerID="cri-o://46178f3882844e754cc54e999ed3c0f1fce1ca4c536309f64dfac228c8d8d2a3" gracePeriod=2 Jan 31 06:50:27 crc kubenswrapper[4687]: I0131 06:50:27.609913 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l2btx" Jan 31 06:50:27 crc kubenswrapper[4687]: I0131 06:50:27.609970 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2btx" event={"ID":"8ed021eb-a227-4014-a487-72aa0de25bac","Type":"ContainerDied","Data":"2502c52c2d1cd0bfd1f3e48fb2aa6630612be228eed44211fe5f0b5343fafd74"} Jan 31 06:50:27 crc kubenswrapper[4687]: I0131 06:50:27.610022 4687 scope.go:117] "RemoveContainer" containerID="25c6810dfc2b19b46d120d51a4fc898eed020e65d507c50d8bf13005d344aca3" Jan 31 06:50:27 crc kubenswrapper[4687]: I0131 06:50:27.612898 4687 generic.go:334] "Generic (PLEG): container finished" podID="dceba003-329b-4858-a9d2-7499eef39366" containerID="46178f3882844e754cc54e999ed3c0f1fce1ca4c536309f64dfac228c8d8d2a3" exitCode=0 Jan 31 06:50:27 crc kubenswrapper[4687]: I0131 06:50:27.612964 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j46rp" event={"ID":"dceba003-329b-4858-a9d2-7499eef39366","Type":"ContainerDied","Data":"46178f3882844e754cc54e999ed3c0f1fce1ca4c536309f64dfac228c8d8d2a3"} Jan 31 06:50:27 crc kubenswrapper[4687]: I0131 06:50:27.624699 4687 scope.go:117] "RemoveContainer" containerID="bf85af373958e1e93d1f8f11d4ac20928993edbe7f6dbb8559d83fe06014bc38" Jan 31 06:50:27 crc kubenswrapper[4687]: I0131 06:50:27.638010 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l2btx"] Jan 31 06:50:27 crc kubenswrapper[4687]: I0131 06:50:27.640798 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l2btx"] Jan 31 06:50:27 crc kubenswrapper[4687]: I0131 06:50:27.658931 4687 scope.go:117] "RemoveContainer" containerID="2016c73be2e3dfb1602c64078c9d4ebbed9c3653ea93407b1e064237f0062675" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.151730 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xkfv6"] Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.152282 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xkfv6" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" containerName="registry-server" containerID="cri-o://a90a7cb91b73807835a058e8a1acd8c5179f90abc7a90ce3662fd9d792000a8c" gracePeriod=2 Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.296572 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.457703 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r26dw\" (UniqueName: \"kubernetes.io/projected/dceba003-329b-4858-a9d2-7499eef39366-kube-api-access-r26dw\") pod \"dceba003-329b-4858-a9d2-7499eef39366\" (UID: \"dceba003-329b-4858-a9d2-7499eef39366\") " Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.457772 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dceba003-329b-4858-a9d2-7499eef39366-catalog-content\") pod \"dceba003-329b-4858-a9d2-7499eef39366\" (UID: \"dceba003-329b-4858-a9d2-7499eef39366\") " Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.457849 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dceba003-329b-4858-a9d2-7499eef39366-utilities\") pod \"dceba003-329b-4858-a9d2-7499eef39366\" (UID: \"dceba003-329b-4858-a9d2-7499eef39366\") " Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.459045 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dceba003-329b-4858-a9d2-7499eef39366-utilities" (OuterVolumeSpecName: "utilities") pod "dceba003-329b-4858-a9d2-7499eef39366" (UID: 
"dceba003-329b-4858-a9d2-7499eef39366"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.466319 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dceba003-329b-4858-a9d2-7499eef39366-kube-api-access-r26dw" (OuterVolumeSpecName: "kube-api-access-r26dw") pod "dceba003-329b-4858-a9d2-7499eef39366" (UID: "dceba003-329b-4858-a9d2-7499eef39366"). InnerVolumeSpecName "kube-api-access-r26dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.514880 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dceba003-329b-4858-a9d2-7499eef39366-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dceba003-329b-4858-a9d2-7499eef39366" (UID: "dceba003-329b-4858-a9d2-7499eef39366"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.560097 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dceba003-329b-4858-a9d2-7499eef39366-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.560405 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dceba003-329b-4858-a9d2-7499eef39366-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.560880 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r26dw\" (UniqueName: \"kubernetes.io/projected/dceba003-329b-4858-a9d2-7499eef39366-kube-api-access-r26dw\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.627204 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-j46rp" event={"ID":"dceba003-329b-4858-a9d2-7499eef39366","Type":"ContainerDied","Data":"95fb4a6a8808c2b3a4f3c599756db89c2c32fe027b18f45bd693cfffa1242d19"} Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.627266 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j46rp" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.627336 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.627279 4687 scope.go:117] "RemoveContainer" containerID="46178f3882844e754cc54e999ed3c0f1fce1ca4c536309f64dfac228c8d8d2a3" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.634114 4687 generic.go:334] "Generic (PLEG): container finished" podID="267c7942-99ed-42bc-bb0c-3d2a2119267e" containerID="a90a7cb91b73807835a058e8a1acd8c5179f90abc7a90ce3662fd9d792000a8c" exitCode=0 Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.634168 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkfv6" event={"ID":"267c7942-99ed-42bc-bb0c-3d2a2119267e","Type":"ContainerDied","Data":"a90a7cb91b73807835a058e8a1acd8c5179f90abc7a90ce3662fd9d792000a8c"} Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.634195 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkfv6" event={"ID":"267c7942-99ed-42bc-bb0c-3d2a2119267e","Type":"ContainerDied","Data":"9a904c446d3fe91bc90076ab7632ee4b16e869de24378702cdd9a620e1f50946"} Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.634253 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xkfv6" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.666402 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j46rp"] Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.669847 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j46rp"] Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.669930 4687 scope.go:117] "RemoveContainer" containerID="6440e22ec10ad1507f54e35a6eb2c77fb13a3bc7d6db5b0006ae0965f7d232d2" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.693401 4687 scope.go:117] "RemoveContainer" containerID="963fc72e7a5eca876faf40595961f07491c7d65d1b0aab34cb731fb96ac9e02f" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.711765 4687 scope.go:117] "RemoveContainer" containerID="a90a7cb91b73807835a058e8a1acd8c5179f90abc7a90ce3662fd9d792000a8c" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.726858 4687 scope.go:117] "RemoveContainer" containerID="7b1d0bf90aed5e9fe52815c166939edde7d0c8e809a40a489d06d8f0c71c7697" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.742783 4687 scope.go:117] "RemoveContainer" containerID="0b0458d6d7fbe76b6736e307a256b2fdb03957fd3cd3f2394af5ee1475fe5cc3" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.756656 4687 scope.go:117] "RemoveContainer" containerID="a90a7cb91b73807835a058e8a1acd8c5179f90abc7a90ce3662fd9d792000a8c" Jan 31 06:50:28 crc kubenswrapper[4687]: E0131 06:50:28.757777 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a90a7cb91b73807835a058e8a1acd8c5179f90abc7a90ce3662fd9d792000a8c\": container with ID starting with a90a7cb91b73807835a058e8a1acd8c5179f90abc7a90ce3662fd9d792000a8c not found: ID does not exist" containerID="a90a7cb91b73807835a058e8a1acd8c5179f90abc7a90ce3662fd9d792000a8c" Jan 31 06:50:28 crc 
kubenswrapper[4687]: I0131 06:50:28.757827 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a90a7cb91b73807835a058e8a1acd8c5179f90abc7a90ce3662fd9d792000a8c"} err="failed to get container status \"a90a7cb91b73807835a058e8a1acd8c5179f90abc7a90ce3662fd9d792000a8c\": rpc error: code = NotFound desc = could not find container \"a90a7cb91b73807835a058e8a1acd8c5179f90abc7a90ce3662fd9d792000a8c\": container with ID starting with a90a7cb91b73807835a058e8a1acd8c5179f90abc7a90ce3662fd9d792000a8c not found: ID does not exist" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.757860 4687 scope.go:117] "RemoveContainer" containerID="7b1d0bf90aed5e9fe52815c166939edde7d0c8e809a40a489d06d8f0c71c7697" Jan 31 06:50:28 crc kubenswrapper[4687]: E0131 06:50:28.758293 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b1d0bf90aed5e9fe52815c166939edde7d0c8e809a40a489d06d8f0c71c7697\": container with ID starting with 7b1d0bf90aed5e9fe52815c166939edde7d0c8e809a40a489d06d8f0c71c7697 not found: ID does not exist" containerID="7b1d0bf90aed5e9fe52815c166939edde7d0c8e809a40a489d06d8f0c71c7697" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.758323 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b1d0bf90aed5e9fe52815c166939edde7d0c8e809a40a489d06d8f0c71c7697"} err="failed to get container status \"7b1d0bf90aed5e9fe52815c166939edde7d0c8e809a40a489d06d8f0c71c7697\": rpc error: code = NotFound desc = could not find container \"7b1d0bf90aed5e9fe52815c166939edde7d0c8e809a40a489d06d8f0c71c7697\": container with ID starting with 7b1d0bf90aed5e9fe52815c166939edde7d0c8e809a40a489d06d8f0c71c7697 not found: ID does not exist" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.758346 4687 scope.go:117] "RemoveContainer" containerID="0b0458d6d7fbe76b6736e307a256b2fdb03957fd3cd3f2394af5ee1475fe5cc3" Jan 31 
06:50:28 crc kubenswrapper[4687]: E0131 06:50:28.758564 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b0458d6d7fbe76b6736e307a256b2fdb03957fd3cd3f2394af5ee1475fe5cc3\": container with ID starting with 0b0458d6d7fbe76b6736e307a256b2fdb03957fd3cd3f2394af5ee1475fe5cc3 not found: ID does not exist" containerID="0b0458d6d7fbe76b6736e307a256b2fdb03957fd3cd3f2394af5ee1475fe5cc3" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.758583 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b0458d6d7fbe76b6736e307a256b2fdb03957fd3cd3f2394af5ee1475fe5cc3"} err="failed to get container status \"0b0458d6d7fbe76b6736e307a256b2fdb03957fd3cd3f2394af5ee1475fe5cc3\": rpc error: code = NotFound desc = could not find container \"0b0458d6d7fbe76b6736e307a256b2fdb03957fd3cd3f2394af5ee1475fe5cc3\": container with ID starting with 0b0458d6d7fbe76b6736e307a256b2fdb03957fd3cd3f2394af5ee1475fe5cc3 not found: ID does not exist" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.771087 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqpxr\" (UniqueName: \"kubernetes.io/projected/267c7942-99ed-42bc-bb0c-3d2a2119267e-kube-api-access-nqpxr\") pod \"267c7942-99ed-42bc-bb0c-3d2a2119267e\" (UID: \"267c7942-99ed-42bc-bb0c-3d2a2119267e\") " Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.771218 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/267c7942-99ed-42bc-bb0c-3d2a2119267e-utilities\") pod \"267c7942-99ed-42bc-bb0c-3d2a2119267e\" (UID: \"267c7942-99ed-42bc-bb0c-3d2a2119267e\") " Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.771241 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/267c7942-99ed-42bc-bb0c-3d2a2119267e-catalog-content\") pod \"267c7942-99ed-42bc-bb0c-3d2a2119267e\" (UID: \"267c7942-99ed-42bc-bb0c-3d2a2119267e\") " Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.772992 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/267c7942-99ed-42bc-bb0c-3d2a2119267e-utilities" (OuterVolumeSpecName: "utilities") pod "267c7942-99ed-42bc-bb0c-3d2a2119267e" (UID: "267c7942-99ed-42bc-bb0c-3d2a2119267e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.778753 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/267c7942-99ed-42bc-bb0c-3d2a2119267e-kube-api-access-nqpxr" (OuterVolumeSpecName: "kube-api-access-nqpxr") pod "267c7942-99ed-42bc-bb0c-3d2a2119267e" (UID: "267c7942-99ed-42bc-bb0c-3d2a2119267e"). InnerVolumeSpecName "kube-api-access-nqpxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.793764 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/267c7942-99ed-42bc-bb0c-3d2a2119267e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "267c7942-99ed-42bc-bb0c-3d2a2119267e" (UID: "267c7942-99ed-42bc-bb0c-3d2a2119267e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.872763 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/267c7942-99ed-42bc-bb0c-3d2a2119267e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.872807 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/267c7942-99ed-42bc-bb0c-3d2a2119267e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.872820 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqpxr\" (UniqueName: \"kubernetes.io/projected/267c7942-99ed-42bc-bb0c-3d2a2119267e-kube-api-access-nqpxr\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.962027 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xkfv6"] Jan 31 06:50:28 crc kubenswrapper[4687]: I0131 06:50:28.965440 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xkfv6"] Jan 31 06:50:29 crc kubenswrapper[4687]: I0131 06:50:29.553474 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mrjq6"] Jan 31 06:50:29 crc kubenswrapper[4687]: I0131 06:50:29.554079 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mrjq6" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" containerName="registry-server" containerID="cri-o://d41697ddadcd64d35f650898d504635d2850beb45d80908ca74bb2e5fe3de8ee" gracePeriod=2 Jan 31 06:50:29 crc kubenswrapper[4687]: I0131 06:50:29.610388 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" path="/var/lib/kubelet/pods/267c7942-99ed-42bc-bb0c-3d2a2119267e/volumes" 
Jan 31 06:50:29 crc kubenswrapper[4687]: I0131 06:50:29.611301 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" path="/var/lib/kubelet/pods/8ed021eb-a227-4014-a487-72aa0de25bac/volumes" Jan 31 06:50:29 crc kubenswrapper[4687]: I0131 06:50:29.612024 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dceba003-329b-4858-a9d2-7499eef39366" path="/var/lib/kubelet/pods/dceba003-329b-4858-a9d2-7499eef39366/volumes" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.582983 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.646715 4687 generic.go:334] "Generic (PLEG): container finished" podID="d9539b4b-d10e-4607-9195-0acd7cee10c8" containerID="d41697ddadcd64d35f650898d504635d2850beb45d80908ca74bb2e5fe3de8ee" exitCode=0 Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.646758 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrjq6" event={"ID":"d9539b4b-d10e-4607-9195-0acd7cee10c8","Type":"ContainerDied","Data":"d41697ddadcd64d35f650898d504635d2850beb45d80908ca74bb2e5fe3de8ee"} Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.646785 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrjq6" event={"ID":"d9539b4b-d10e-4607-9195-0acd7cee10c8","Type":"ContainerDied","Data":"0c6f5861153e6a07b30cdad33611c2d9f12284f39385f73710e7b4e16cdab4b3"} Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.646801 4687 scope.go:117] "RemoveContainer" containerID="d41697ddadcd64d35f650898d504635d2850beb45d80908ca74bb2e5fe3de8ee" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.646805 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mrjq6" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.660510 4687 scope.go:117] "RemoveContainer" containerID="98a51efababe836aad2f815475c8ea521fe488d2f319bc19e0709c2ed2baeffb" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.673668 4687 scope.go:117] "RemoveContainer" containerID="c1f7339bda54e308bdd2c68627cc1662fd2aef5584e32fb3958515bdded0f27b" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.687261 4687 scope.go:117] "RemoveContainer" containerID="d41697ddadcd64d35f650898d504635d2850beb45d80908ca74bb2e5fe3de8ee" Jan 31 06:50:30 crc kubenswrapper[4687]: E0131 06:50:30.687820 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d41697ddadcd64d35f650898d504635d2850beb45d80908ca74bb2e5fe3de8ee\": container with ID starting with d41697ddadcd64d35f650898d504635d2850beb45d80908ca74bb2e5fe3de8ee not found: ID does not exist" containerID="d41697ddadcd64d35f650898d504635d2850beb45d80908ca74bb2e5fe3de8ee" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.687867 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41697ddadcd64d35f650898d504635d2850beb45d80908ca74bb2e5fe3de8ee"} err="failed to get container status \"d41697ddadcd64d35f650898d504635d2850beb45d80908ca74bb2e5fe3de8ee\": rpc error: code = NotFound desc = could not find container \"d41697ddadcd64d35f650898d504635d2850beb45d80908ca74bb2e5fe3de8ee\": container with ID starting with d41697ddadcd64d35f650898d504635d2850beb45d80908ca74bb2e5fe3de8ee not found: ID does not exist" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.687900 4687 scope.go:117] "RemoveContainer" containerID="98a51efababe836aad2f815475c8ea521fe488d2f319bc19e0709c2ed2baeffb" Jan 31 06:50:30 crc kubenswrapper[4687]: E0131 06:50:30.688267 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"98a51efababe836aad2f815475c8ea521fe488d2f319bc19e0709c2ed2baeffb\": container with ID starting with 98a51efababe836aad2f815475c8ea521fe488d2f319bc19e0709c2ed2baeffb not found: ID does not exist" containerID="98a51efababe836aad2f815475c8ea521fe488d2f319bc19e0709c2ed2baeffb" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.688301 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98a51efababe836aad2f815475c8ea521fe488d2f319bc19e0709c2ed2baeffb"} err="failed to get container status \"98a51efababe836aad2f815475c8ea521fe488d2f319bc19e0709c2ed2baeffb\": rpc error: code = NotFound desc = could not find container \"98a51efababe836aad2f815475c8ea521fe488d2f319bc19e0709c2ed2baeffb\": container with ID starting with 98a51efababe836aad2f815475c8ea521fe488d2f319bc19e0709c2ed2baeffb not found: ID does not exist" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.688322 4687 scope.go:117] "RemoveContainer" containerID="c1f7339bda54e308bdd2c68627cc1662fd2aef5584e32fb3958515bdded0f27b" Jan 31 06:50:30 crc kubenswrapper[4687]: E0131 06:50:30.688658 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1f7339bda54e308bdd2c68627cc1662fd2aef5584e32fb3958515bdded0f27b\": container with ID starting with c1f7339bda54e308bdd2c68627cc1662fd2aef5584e32fb3958515bdded0f27b not found: ID does not exist" containerID="c1f7339bda54e308bdd2c68627cc1662fd2aef5584e32fb3958515bdded0f27b" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.688690 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1f7339bda54e308bdd2c68627cc1662fd2aef5584e32fb3958515bdded0f27b"} err="failed to get container status \"c1f7339bda54e308bdd2c68627cc1662fd2aef5584e32fb3958515bdded0f27b\": rpc error: code = NotFound desc = could not find container 
\"c1f7339bda54e308bdd2c68627cc1662fd2aef5584e32fb3958515bdded0f27b\": container with ID starting with c1f7339bda54e308bdd2c68627cc1662fd2aef5584e32fb3958515bdded0f27b not found: ID does not exist" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.696154 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdj8t\" (UniqueName: \"kubernetes.io/projected/d9539b4b-d10e-4607-9195-0acd7cee10c8-kube-api-access-xdj8t\") pod \"d9539b4b-d10e-4607-9195-0acd7cee10c8\" (UID: \"d9539b4b-d10e-4607-9195-0acd7cee10c8\") " Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.696218 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9539b4b-d10e-4607-9195-0acd7cee10c8-utilities\") pod \"d9539b4b-d10e-4607-9195-0acd7cee10c8\" (UID: \"d9539b4b-d10e-4607-9195-0acd7cee10c8\") " Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.696251 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9539b4b-d10e-4607-9195-0acd7cee10c8-catalog-content\") pod \"d9539b4b-d10e-4607-9195-0acd7cee10c8\" (UID: \"d9539b4b-d10e-4607-9195-0acd7cee10c8\") " Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.697289 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9539b4b-d10e-4607-9195-0acd7cee10c8-utilities" (OuterVolumeSpecName: "utilities") pod "d9539b4b-d10e-4607-9195-0acd7cee10c8" (UID: "d9539b4b-d10e-4607-9195-0acd7cee10c8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.700896 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9539b4b-d10e-4607-9195-0acd7cee10c8-kube-api-access-xdj8t" (OuterVolumeSpecName: "kube-api-access-xdj8t") pod "d9539b4b-d10e-4607-9195-0acd7cee10c8" (UID: "d9539b4b-d10e-4607-9195-0acd7cee10c8"). InnerVolumeSpecName "kube-api-access-xdj8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.797325 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdj8t\" (UniqueName: \"kubernetes.io/projected/d9539b4b-d10e-4607-9195-0acd7cee10c8-kube-api-access-xdj8t\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.797364 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9539b4b-d10e-4607-9195-0acd7cee10c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.807446 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9539b4b-d10e-4607-9195-0acd7cee10c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9539b4b-d10e-4607-9195-0acd7cee10c8" (UID: "d9539b4b-d10e-4607-9195-0acd7cee10c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.898359 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9539b4b-d10e-4607-9195-0acd7cee10c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.971151 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mrjq6"] Jan 31 06:50:30 crc kubenswrapper[4687]: I0131 06:50:30.975679 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mrjq6"] Jan 31 06:50:31 crc kubenswrapper[4687]: I0131 06:50:31.611333 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" path="/var/lib/kubelet/pods/d9539b4b-d10e-4607-9195-0acd7cee10c8/volumes" Jan 31 06:50:37 crc kubenswrapper[4687]: I0131 06:50:37.672749 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65f86f88c7-kkzm9"] Jan 31 06:50:37 crc kubenswrapper[4687]: I0131 06:50:37.673371 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" podUID="cbf2e1f0-778b-499e-8160-58cc440a9b23" containerName="controller-manager" containerID="cri-o://28fd7ed206aa03b7e8527d7a6dac55cb0cc847d3385d4afe06443df15550aecd" gracePeriod=30 Jan 31 06:50:37 crc kubenswrapper[4687]: I0131 06:50:37.697915 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz"] Jan 31 06:50:37 crc kubenswrapper[4687]: I0131 06:50:37.698221 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" podUID="ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191" 
containerName="route-controller-manager" containerID="cri-o://ee802957f31b25e30dc320be407d7137e2a52b0756ddda8de887d9490b01e8ba" gracePeriod=30 Jan 31 06:50:38 crc kubenswrapper[4687]: I0131 06:50:38.700059 4687 generic.go:334] "Generic (PLEG): container finished" podID="ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191" containerID="ee802957f31b25e30dc320be407d7137e2a52b0756ddda8de887d9490b01e8ba" exitCode=0 Jan 31 06:50:38 crc kubenswrapper[4687]: I0131 06:50:38.700186 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" event={"ID":"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191","Type":"ContainerDied","Data":"ee802957f31b25e30dc320be407d7137e2a52b0756ddda8de887d9490b01e8ba"} Jan 31 06:50:38 crc kubenswrapper[4687]: I0131 06:50:38.702170 4687 generic.go:334] "Generic (PLEG): container finished" podID="cbf2e1f0-778b-499e-8160-58cc440a9b23" containerID="28fd7ed206aa03b7e8527d7a6dac55cb0cc847d3385d4afe06443df15550aecd" exitCode=0 Jan 31 06:50:38 crc kubenswrapper[4687]: I0131 06:50:38.702215 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" event={"ID":"cbf2e1f0-778b-499e-8160-58cc440a9b23","Type":"ContainerDied","Data":"28fd7ed206aa03b7e8527d7a6dac55cb0cc847d3385d4afe06443df15550aecd"} Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.153716 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.185558 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm"] Jan 31 06:50:39 crc kubenswrapper[4687]: E0131 06:50:39.185980 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" containerName="registry-server" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.186060 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" containerName="registry-server" Jan 31 06:50:39 crc kubenswrapper[4687]: E0131 06:50:39.186156 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dceba003-329b-4858-a9d2-7499eef39366" containerName="extract-content" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.186228 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dceba003-329b-4858-a9d2-7499eef39366" containerName="extract-content" Jan 31 06:50:39 crc kubenswrapper[4687]: E0131 06:50:39.186363 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" containerName="extract-utilities" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.186450 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" containerName="extract-utilities" Jan 31 06:50:39 crc kubenswrapper[4687]: E0131 06:50:39.186531 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191" containerName="route-controller-manager" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.186599 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191" containerName="route-controller-manager" Jan 31 06:50:39 crc kubenswrapper[4687]: E0131 06:50:39.186664 4687 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" containerName="registry-server" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.186725 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" containerName="registry-server" Jan 31 06:50:39 crc kubenswrapper[4687]: E0131 06:50:39.186790 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" containerName="extract-utilities" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.186861 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" containerName="extract-utilities" Jan 31 06:50:39 crc kubenswrapper[4687]: E0131 06:50:39.186943 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" containerName="registry-server" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.187008 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" containerName="registry-server" Jan 31 06:50:39 crc kubenswrapper[4687]: E0131 06:50:39.187066 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" containerName="extract-utilities" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.187125 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" containerName="extract-utilities" Jan 31 06:50:39 crc kubenswrapper[4687]: E0131 06:50:39.187188 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" containerName="extract-content" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.187249 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" containerName="extract-content" Jan 31 06:50:39 crc kubenswrapper[4687]: E0131 06:50:39.187309 4687 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dceba003-329b-4858-a9d2-7499eef39366" containerName="extract-utilities" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.187366 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dceba003-329b-4858-a9d2-7499eef39366" containerName="extract-utilities" Jan 31 06:50:39 crc kubenswrapper[4687]: E0131 06:50:39.187443 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dceba003-329b-4858-a9d2-7499eef39366" containerName="registry-server" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.187499 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dceba003-329b-4858-a9d2-7499eef39366" containerName="registry-server" Jan 31 06:50:39 crc kubenswrapper[4687]: E0131 06:50:39.187565 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" containerName="extract-content" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.187626 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" containerName="extract-content" Jan 31 06:50:39 crc kubenswrapper[4687]: E0131 06:50:39.187680 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" containerName="extract-content" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.187749 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" containerName="extract-content" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.190834 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191" containerName="route-controller-manager" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.191126 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="267c7942-99ed-42bc-bb0c-3d2a2119267e" containerName="registry-server" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 
06:50:39.191151 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="dceba003-329b-4858-a9d2-7499eef39366" containerName="registry-server" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.191180 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9539b4b-d10e-4607-9195-0acd7cee10c8" containerName="registry-server" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.191205 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ed021eb-a227-4014-a487-72aa0de25bac" containerName="registry-server" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.192131 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.211497 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm"] Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.245921 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.316688 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-serving-cert\") pod \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\" (UID: \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\") " Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.316761 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6ffv\" (UniqueName: \"kubernetes.io/projected/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-kube-api-access-v6ffv\") pod \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\" (UID: \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\") " Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.316870 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-config\") pod \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\" (UID: \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\") " Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.316937 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-client-ca\") pod \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\" (UID: \"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191\") " Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.317110 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68d17a2c-0d56-4668-859d-d10a59f7f9a3-serving-cert\") pod \"route-controller-manager-b6cb86894-njjqm\" (UID: \"68d17a2c-0d56-4668-859d-d10a59f7f9a3\") " pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 
06:50:39.317156 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68d17a2c-0d56-4668-859d-d10a59f7f9a3-client-ca\") pod \"route-controller-manager-b6cb86894-njjqm\" (UID: \"68d17a2c-0d56-4668-859d-d10a59f7f9a3\") " pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.317212 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djwsh\" (UniqueName: \"kubernetes.io/projected/68d17a2c-0d56-4668-859d-d10a59f7f9a3-kube-api-access-djwsh\") pod \"route-controller-manager-b6cb86894-njjqm\" (UID: \"68d17a2c-0d56-4668-859d-d10a59f7f9a3\") " pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.317257 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68d17a2c-0d56-4668-859d-d10a59f7f9a3-config\") pod \"route-controller-manager-b6cb86894-njjqm\" (UID: \"68d17a2c-0d56-4668-859d-d10a59f7f9a3\") " pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.318114 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-client-ca" (OuterVolumeSpecName: "client-ca") pod "ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191" (UID: "ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.318172 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-config" (OuterVolumeSpecName: "config") pod "ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191" (UID: "ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.322049 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-kube-api-access-v6ffv" (OuterVolumeSpecName: "kube-api-access-v6ffv") pod "ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191" (UID: "ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191"). InnerVolumeSpecName "kube-api-access-v6ffv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.322060 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191" (UID: "ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.417931 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-proxy-ca-bundles\") pod \"cbf2e1f0-778b-499e-8160-58cc440a9b23\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.418035 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhnzl\" (UniqueName: \"kubernetes.io/projected/cbf2e1f0-778b-499e-8160-58cc440a9b23-kube-api-access-bhnzl\") pod \"cbf2e1f0-778b-499e-8160-58cc440a9b23\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.418080 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-config\") pod \"cbf2e1f0-778b-499e-8160-58cc440a9b23\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.418113 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf2e1f0-778b-499e-8160-58cc440a9b23-serving-cert\") pod \"cbf2e1f0-778b-499e-8160-58cc440a9b23\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.418165 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-client-ca\") pod \"cbf2e1f0-778b-499e-8160-58cc440a9b23\" (UID: \"cbf2e1f0-778b-499e-8160-58cc440a9b23\") " Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.418329 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/68d17a2c-0d56-4668-859d-d10a59f7f9a3-serving-cert\") pod \"route-controller-manager-b6cb86894-njjqm\" (UID: \"68d17a2c-0d56-4668-859d-d10a59f7f9a3\") " pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.418969 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-client-ca" (OuterVolumeSpecName: "client-ca") pod "cbf2e1f0-778b-499e-8160-58cc440a9b23" (UID: "cbf2e1f0-778b-499e-8160-58cc440a9b23"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.418984 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cbf2e1f0-778b-499e-8160-58cc440a9b23" (UID: "cbf2e1f0-778b-499e-8160-58cc440a9b23"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.419014 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68d17a2c-0d56-4668-859d-d10a59f7f9a3-client-ca\") pod \"route-controller-manager-b6cb86894-njjqm\" (UID: \"68d17a2c-0d56-4668-859d-d10a59f7f9a3\") " pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.419080 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djwsh\" (UniqueName: \"kubernetes.io/projected/68d17a2c-0d56-4668-859d-d10a59f7f9a3-kube-api-access-djwsh\") pod \"route-controller-manager-b6cb86894-njjqm\" (UID: \"68d17a2c-0d56-4668-859d-d10a59f7f9a3\") " pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.419122 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68d17a2c-0d56-4668-859d-d10a59f7f9a3-config\") pod \"route-controller-manager-b6cb86894-njjqm\" (UID: \"68d17a2c-0d56-4668-859d-d10a59f7f9a3\") " pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.419122 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-config" (OuterVolumeSpecName: "config") pod "cbf2e1f0-778b-499e-8160-58cc440a9b23" (UID: "cbf2e1f0-778b-499e-8160-58cc440a9b23"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.419206 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.419222 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.419234 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.419245 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6ffv\" (UniqueName: \"kubernetes.io/projected/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-kube-api-access-v6ffv\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.419259 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.419270 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf2e1f0-778b-499e-8160-58cc440a9b23-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.419281 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.419953 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68d17a2c-0d56-4668-859d-d10a59f7f9a3-client-ca\") pod \"route-controller-manager-b6cb86894-njjqm\" (UID: \"68d17a2c-0d56-4668-859d-d10a59f7f9a3\") " pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.421500 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf2e1f0-778b-499e-8160-58cc440a9b23-kube-api-access-bhnzl" (OuterVolumeSpecName: "kube-api-access-bhnzl") pod "cbf2e1f0-778b-499e-8160-58cc440a9b23" (UID: "cbf2e1f0-778b-499e-8160-58cc440a9b23"). InnerVolumeSpecName "kube-api-access-bhnzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.421659 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf2e1f0-778b-499e-8160-58cc440a9b23-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cbf2e1f0-778b-499e-8160-58cc440a9b23" (UID: "cbf2e1f0-778b-499e-8160-58cc440a9b23"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.421811 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68d17a2c-0d56-4668-859d-d10a59f7f9a3-config\") pod \"route-controller-manager-b6cb86894-njjqm\" (UID: \"68d17a2c-0d56-4668-859d-d10a59f7f9a3\") " pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.422107 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68d17a2c-0d56-4668-859d-d10a59f7f9a3-serving-cert\") pod \"route-controller-manager-b6cb86894-njjqm\" (UID: \"68d17a2c-0d56-4668-859d-d10a59f7f9a3\") " pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.433813 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djwsh\" (UniqueName: \"kubernetes.io/projected/68d17a2c-0d56-4668-859d-d10a59f7f9a3-kube-api-access-djwsh\") pod \"route-controller-manager-b6cb86894-njjqm\" (UID: \"68d17a2c-0d56-4668-859d-d10a59f7f9a3\") " pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.520635 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhnzl\" (UniqueName: \"kubernetes.io/projected/cbf2e1f0-778b-499e-8160-58cc440a9b23-kube-api-access-bhnzl\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.520669 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf2e1f0-778b-499e-8160-58cc440a9b23-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.543725 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.716817 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" event={"ID":"ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191","Type":"ContainerDied","Data":"0564f4e3c2bf2a4920da3701ae8e9d42f628a4811bdb73c646efc0d78a080471"} Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.716831 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.717217 4687 scope.go:117] "RemoveContainer" containerID="ee802957f31b25e30dc320be407d7137e2a52b0756ddda8de887d9490b01e8ba" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.722287 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" event={"ID":"cbf2e1f0-778b-499e-8160-58cc440a9b23","Type":"ContainerDied","Data":"9d2fd6a05208d5a7e8fd9ad964f1b24bb2008f278160c3c6c99ccd7e617164ec"} Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.722366 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65f86f88c7-kkzm9" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.744137 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz"] Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.750959 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c7fc76ff-xf8jz"] Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.753988 4687 scope.go:117] "RemoveContainer" containerID="28fd7ed206aa03b7e8527d7a6dac55cb0cc847d3385d4afe06443df15550aecd" Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.754692 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65f86f88c7-kkzm9"] Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.757607 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65f86f88c7-kkzm9"] Jan 31 06:50:39 crc kubenswrapper[4687]: I0131 06:50:39.963131 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm"] Jan 31 06:50:39 crc kubenswrapper[4687]: W0131 06:50:39.968107 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68d17a2c_0d56_4668_859d_d10a59f7f9a3.slice/crio-cb670ca1d899f34659ab73f1b529b8d55f1cdc5ff2e333c320343276d85457cf WatchSource:0}: Error finding container cb670ca1d899f34659ab73f1b529b8d55f1cdc5ff2e333c320343276d85457cf: Status 404 returned error can't find the container with id cb670ca1d899f34659ab73f1b529b8d55f1cdc5ff2e333c320343276d85457cf Jan 31 06:50:40 crc kubenswrapper[4687]: I0131 06:50:40.015489 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" 
podUID="ea6afbaa-a516-45e0-bbd8-199b879e2654" containerName="oauth-openshift" containerID="cri-o://852231b1387fd3d60836e9358005d35936f0194543fdceb35a1d61c57ac4ea5c" gracePeriod=15 Jan 31 06:50:40 crc kubenswrapper[4687]: I0131 06:50:40.734039 4687 generic.go:334] "Generic (PLEG): container finished" podID="ea6afbaa-a516-45e0-bbd8-199b879e2654" containerID="852231b1387fd3d60836e9358005d35936f0194543fdceb35a1d61c57ac4ea5c" exitCode=0 Jan 31 06:50:40 crc kubenswrapper[4687]: I0131 06:50:40.734121 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" event={"ID":"ea6afbaa-a516-45e0-bbd8-199b879e2654","Type":"ContainerDied","Data":"852231b1387fd3d60836e9358005d35936f0194543fdceb35a1d61c57ac4ea5c"} Jan 31 06:50:40 crc kubenswrapper[4687]: I0131 06:50:40.735609 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" event={"ID":"68d17a2c-0d56-4668-859d-d10a59f7f9a3","Type":"ContainerStarted","Data":"5f5304a40ebb418fd3ba81db3154db71be67826d51062dd4c123a7f2b02db2c6"} Jan 31 06:50:40 crc kubenswrapper[4687]: I0131 06:50:40.735633 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" event={"ID":"68d17a2c-0d56-4668-859d-d10a59f7f9a3","Type":"ContainerStarted","Data":"cb670ca1d899f34659ab73f1b529b8d55f1cdc5ff2e333c320343276d85457cf"} Jan 31 06:50:40 crc kubenswrapper[4687]: I0131 06:50:40.736020 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" Jan 31 06:50:40 crc kubenswrapper[4687]: I0131 06:50:40.746702 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" Jan 31 06:50:40 crc kubenswrapper[4687]: I0131 06:50:40.764776 4687 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b6cb86894-njjqm" podStartSLOduration=3.764753513 podStartE2EDuration="3.764753513s" podCreationTimestamp="2026-01-31 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:50:40.761097764 +0000 UTC m=+467.038357349" watchObservedRunningTime="2026-01-31 06:50:40.764753513 +0000 UTC m=+467.042013128" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.222192 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.363540 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-serving-cert\") pod \"ea6afbaa-a516-45e0-bbd8-199b879e2654\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.364636 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-idp-0-file-data\") pod \"ea6afbaa-a516-45e0-bbd8-199b879e2654\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.364736 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-provider-selection\") pod \"ea6afbaa-a516-45e0-bbd8-199b879e2654\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.364770 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-trusted-ca-bundle\") pod \"ea6afbaa-a516-45e0-bbd8-199b879e2654\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.364795 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-login\") pod \"ea6afbaa-a516-45e0-bbd8-199b879e2654\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.364824 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-ocp-branding-template\") pod \"ea6afbaa-a516-45e0-bbd8-199b879e2654\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.364850 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea6afbaa-a516-45e0-bbd8-199b879e2654-audit-dir\") pod \"ea6afbaa-a516-45e0-bbd8-199b879e2654\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.364885 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf5rs\" (UniqueName: \"kubernetes.io/projected/ea6afbaa-a516-45e0-bbd8-199b879e2654-kube-api-access-vf5rs\") pod \"ea6afbaa-a516-45e0-bbd8-199b879e2654\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.364919 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-router-certs\") pod \"ea6afbaa-a516-45e0-bbd8-199b879e2654\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.364964 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-error\") pod \"ea6afbaa-a516-45e0-bbd8-199b879e2654\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.365002 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-session\") pod \"ea6afbaa-a516-45e0-bbd8-199b879e2654\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.365030 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-audit-policies\") pod \"ea6afbaa-a516-45e0-bbd8-199b879e2654\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.365073 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-cliconfig\") pod \"ea6afbaa-a516-45e0-bbd8-199b879e2654\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.365096 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-service-ca\") pod \"ea6afbaa-a516-45e0-bbd8-199b879e2654\" (UID: \"ea6afbaa-a516-45e0-bbd8-199b879e2654\") " Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.365514 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ea6afbaa-a516-45e0-bbd8-199b879e2654" (UID: "ea6afbaa-a516-45e0-bbd8-199b879e2654"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.365914 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea6afbaa-a516-45e0-bbd8-199b879e2654-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ea6afbaa-a516-45e0-bbd8-199b879e2654" (UID: "ea6afbaa-a516-45e0-bbd8-199b879e2654"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.366825 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.366852 4687 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea6afbaa-a516-45e0-bbd8-199b879e2654-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.366841 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ea6afbaa-a516-45e0-bbd8-199b879e2654" (UID: "ea6afbaa-a516-45e0-bbd8-199b879e2654"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.366856 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ea6afbaa-a516-45e0-bbd8-199b879e2654" (UID: "ea6afbaa-a516-45e0-bbd8-199b879e2654"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.366933 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ea6afbaa-a516-45e0-bbd8-199b879e2654" (UID: "ea6afbaa-a516-45e0-bbd8-199b879e2654"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.369498 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea6afbaa-a516-45e0-bbd8-199b879e2654-kube-api-access-vf5rs" (OuterVolumeSpecName: "kube-api-access-vf5rs") pod "ea6afbaa-a516-45e0-bbd8-199b879e2654" (UID: "ea6afbaa-a516-45e0-bbd8-199b879e2654"). InnerVolumeSpecName "kube-api-access-vf5rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.369497 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ea6afbaa-a516-45e0-bbd8-199b879e2654" (UID: "ea6afbaa-a516-45e0-bbd8-199b879e2654"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.373873 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ea6afbaa-a516-45e0-bbd8-199b879e2654" (UID: "ea6afbaa-a516-45e0-bbd8-199b879e2654"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.374206 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ea6afbaa-a516-45e0-bbd8-199b879e2654" (UID: "ea6afbaa-a516-45e0-bbd8-199b879e2654"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.374432 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ea6afbaa-a516-45e0-bbd8-199b879e2654" (UID: "ea6afbaa-a516-45e0-bbd8-199b879e2654"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.374657 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ea6afbaa-a516-45e0-bbd8-199b879e2654" (UID: "ea6afbaa-a516-45e0-bbd8-199b879e2654"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.374845 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ea6afbaa-a516-45e0-bbd8-199b879e2654" (UID: "ea6afbaa-a516-45e0-bbd8-199b879e2654"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.376724 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ea6afbaa-a516-45e0-bbd8-199b879e2654" (UID: "ea6afbaa-a516-45e0-bbd8-199b879e2654"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.377799 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ea6afbaa-a516-45e0-bbd8-199b879e2654" (UID: "ea6afbaa-a516-45e0-bbd8-199b879e2654"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.467455 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.467491 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.467506 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.467517 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.467527 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf5rs\" (UniqueName: 
\"kubernetes.io/projected/ea6afbaa-a516-45e0-bbd8-199b879e2654-kube-api-access-vf5rs\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.467536 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.467545 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.467555 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.467572 4687 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.467586 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.467603 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.467615 4687 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea6afbaa-a516-45e0-bbd8-199b879e2654-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.614863 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf2e1f0-778b-499e-8160-58cc440a9b23" path="/var/lib/kubelet/pods/cbf2e1f0-778b-499e-8160-58cc440a9b23/volumes" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.616337 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191" path="/var/lib/kubelet/pods/ffc4bb88-e183-4cd4-a4a2-73fc4d3d4191/volumes" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.691762 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f956cb9bd-86fg4"] Jan 31 06:50:41 crc kubenswrapper[4687]: E0131 06:50:41.692106 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea6afbaa-a516-45e0-bbd8-199b879e2654" containerName="oauth-openshift" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.692129 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6afbaa-a516-45e0-bbd8-199b879e2654" containerName="oauth-openshift" Jan 31 06:50:41 crc kubenswrapper[4687]: E0131 06:50:41.692157 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf2e1f0-778b-499e-8160-58cc440a9b23" containerName="controller-manager" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.692168 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf2e1f0-778b-499e-8160-58cc440a9b23" containerName="controller-manager" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.692321 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea6afbaa-a516-45e0-bbd8-199b879e2654" containerName="oauth-openshift" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.692344 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cbf2e1f0-778b-499e-8160-58cc440a9b23" containerName="controller-manager" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.692883 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.701643 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.701808 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.702113 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.702160 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.703489 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.706148 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.720802 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.737496 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f956cb9bd-86fg4"] Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.742134 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" 
event={"ID":"ea6afbaa-a516-45e0-bbd8-199b879e2654","Type":"ContainerDied","Data":"725662c67fc711ecf1cd3bb9936ff031d3b56978cb0b0953d9a1fb82799cfee1"} Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.742194 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6qn9w" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.742206 4687 scope.go:117] "RemoveContainer" containerID="852231b1387fd3d60836e9358005d35936f0194543fdceb35a1d61c57ac4ea5c" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.770356 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ee7b782-13cb-4792-8297-99b4c745babd-proxy-ca-bundles\") pod \"controller-manager-7f956cb9bd-86fg4\" (UID: \"7ee7b782-13cb-4792-8297-99b4c745babd\") " pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.770511 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee7b782-13cb-4792-8297-99b4c745babd-config\") pod \"controller-manager-7f956cb9bd-86fg4\" (UID: \"7ee7b782-13cb-4792-8297-99b4c745babd\") " pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.770566 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ee7b782-13cb-4792-8297-99b4c745babd-serving-cert\") pod \"controller-manager-7f956cb9bd-86fg4\" (UID: \"7ee7b782-13cb-4792-8297-99b4c745babd\") " pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.771044 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-l9fc9\" (UniqueName: \"kubernetes.io/projected/7ee7b782-13cb-4792-8297-99b4c745babd-kube-api-access-l9fc9\") pod \"controller-manager-7f956cb9bd-86fg4\" (UID: \"7ee7b782-13cb-4792-8297-99b4c745babd\") " pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.771081 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ee7b782-13cb-4792-8297-99b4c745babd-client-ca\") pod \"controller-manager-7f956cb9bd-86fg4\" (UID: \"7ee7b782-13cb-4792-8297-99b4c745babd\") " pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.773925 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6qn9w"] Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.777626 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6qn9w"] Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.871750 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ee7b782-13cb-4792-8297-99b4c745babd-proxy-ca-bundles\") pod \"controller-manager-7f956cb9bd-86fg4\" (UID: \"7ee7b782-13cb-4792-8297-99b4c745babd\") " pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.871866 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee7b782-13cb-4792-8297-99b4c745babd-config\") pod \"controller-manager-7f956cb9bd-86fg4\" (UID: \"7ee7b782-13cb-4792-8297-99b4c745babd\") " pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.871908 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ee7b782-13cb-4792-8297-99b4c745babd-serving-cert\") pod \"controller-manager-7f956cb9bd-86fg4\" (UID: \"7ee7b782-13cb-4792-8297-99b4c745babd\") " pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.871973 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9fc9\" (UniqueName: \"kubernetes.io/projected/7ee7b782-13cb-4792-8297-99b4c745babd-kube-api-access-l9fc9\") pod \"controller-manager-7f956cb9bd-86fg4\" (UID: \"7ee7b782-13cb-4792-8297-99b4c745babd\") " pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.872147 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ee7b782-13cb-4792-8297-99b4c745babd-client-ca\") pod \"controller-manager-7f956cb9bd-86fg4\" (UID: \"7ee7b782-13cb-4792-8297-99b4c745babd\") " pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.873340 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ee7b782-13cb-4792-8297-99b4c745babd-client-ca\") pod \"controller-manager-7f956cb9bd-86fg4\" (UID: \"7ee7b782-13cb-4792-8297-99b4c745babd\") " pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.873539 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee7b782-13cb-4792-8297-99b4c745babd-config\") pod \"controller-manager-7f956cb9bd-86fg4\" (UID: \"7ee7b782-13cb-4792-8297-99b4c745babd\") " pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" 
Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.876234 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ee7b782-13cb-4792-8297-99b4c745babd-serving-cert\") pod \"controller-manager-7f956cb9bd-86fg4\" (UID: \"7ee7b782-13cb-4792-8297-99b4c745babd\") " pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.879666 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ee7b782-13cb-4792-8297-99b4c745babd-proxy-ca-bundles\") pod \"controller-manager-7f956cb9bd-86fg4\" (UID: \"7ee7b782-13cb-4792-8297-99b4c745babd\") " pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:41 crc kubenswrapper[4687]: I0131 06:50:41.902289 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9fc9\" (UniqueName: \"kubernetes.io/projected/7ee7b782-13cb-4792-8297-99b4c745babd-kube-api-access-l9fc9\") pod \"controller-manager-7f956cb9bd-86fg4\" (UID: \"7ee7b782-13cb-4792-8297-99b4c745babd\") " pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:42 crc kubenswrapper[4687]: I0131 06:50:42.031558 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:42 crc kubenswrapper[4687]: I0131 06:50:42.502335 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f956cb9bd-86fg4"] Jan 31 06:50:42 crc kubenswrapper[4687]: I0131 06:50:42.746611 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" event={"ID":"7ee7b782-13cb-4792-8297-99b4c745babd","Type":"ContainerStarted","Data":"48155389c18848c6e0f935f34da096e85742e564754eaafe4eb7fedfa1eeff22"} Jan 31 06:50:43 crc kubenswrapper[4687]: I0131 06:50:43.616345 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea6afbaa-a516-45e0-bbd8-199b879e2654" path="/var/lib/kubelet/pods/ea6afbaa-a516-45e0-bbd8-199b879e2654/volumes" Jan 31 06:50:44 crc kubenswrapper[4687]: I0131 06:50:44.765459 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" event={"ID":"7ee7b782-13cb-4792-8297-99b4c745babd","Type":"ContainerStarted","Data":"80f60c287e92dc02d95d3bfd1395f3e3bef6f2605c8e197f6725da71cad6f021"} Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.687153 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-d7587476c-9hkl4"] Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.688222 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.691821 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.691889 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.691821 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.692335 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.692737 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.693651 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.695479 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.695992 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.696029 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.696038 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 06:50:45 
crc kubenswrapper[4687]: I0131 06:50:45.706275 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d7587476c-9hkl4"] Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.716195 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.716574 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.747263 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.747900 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.755293 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.771026 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.777114 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.803405 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f956cb9bd-86fg4" podStartSLOduration=8.803384114 podStartE2EDuration="8.803384114s" podCreationTimestamp="2026-01-31 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 
06:50:45.794539734 +0000 UTC m=+472.071799319" watchObservedRunningTime="2026-01-31 06:50:45.803384114 +0000 UTC m=+472.080643689" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.837959 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-router-certs\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.838277 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.838302 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.838323 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-audit-policies\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc 
kubenswrapper[4687]: I0131 06:50:45.838519 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.838571 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.838627 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-service-ca\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.838655 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6gmj\" (UniqueName: \"kubernetes.io/projected/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-kube-api-access-k6gmj\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.838821 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-audit-dir\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.838870 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-user-template-error\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.838932 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.838955 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-user-template-login\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.838973 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-session\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: 
\"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.838992 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.940099 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.940531 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-router-certs\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.940574 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.940722 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.940751 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-audit-policies\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.940803 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.940827 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.940850 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.940875 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6gmj\" (UniqueName: \"kubernetes.io/projected/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-kube-api-access-k6gmj\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.940897 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-audit-dir\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.940927 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-user-template-error\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.940998 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.941023 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-user-template-login\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.941059 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-session\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.941432 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.942218 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.942272 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-audit-dir\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " 
pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.942721 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-service-ca\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.943381 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-audit-policies\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.946193 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-session\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.946247 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-router-certs\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.946364 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.946770 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-user-template-login\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.946919 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.947719 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.948712 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-user-template-error\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " 
pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.949213 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:45 crc kubenswrapper[4687]: I0131 06:50:45.968352 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6gmj\" (UniqueName: \"kubernetes.io/projected/f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9-kube-api-access-k6gmj\") pod \"oauth-openshift-d7587476c-9hkl4\" (UID: \"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9\") " pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:46 crc kubenswrapper[4687]: I0131 06:50:46.050192 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:46 crc kubenswrapper[4687]: W0131 06:50:46.472837 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2a0c14c_2aa1_40fe_ac20_ebc15a27d8c9.slice/crio-674e03971ce1c02290f819318f815a5ab937a484bccdb1d7dad58549688e28d7 WatchSource:0}: Error finding container 674e03971ce1c02290f819318f815a5ab937a484bccdb1d7dad58549688e28d7: Status 404 returned error can't find the container with id 674e03971ce1c02290f819318f815a5ab937a484bccdb1d7dad58549688e28d7 Jan 31 06:50:46 crc kubenswrapper[4687]: I0131 06:50:46.473556 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d7587476c-9hkl4"] Jan 31 06:50:46 crc kubenswrapper[4687]: I0131 06:50:46.781655 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" event={"ID":"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9","Type":"ContainerStarted","Data":"674e03971ce1c02290f819318f815a5ab937a484bccdb1d7dad58549688e28d7"} Jan 31 06:50:47 crc kubenswrapper[4687]: I0131 06:50:47.790234 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" event={"ID":"f2a0c14c-2aa1-40fe-ac20-ebc15a27d8c9","Type":"ContainerStarted","Data":"c3bfed31944bd8e1aefdcf47ed535df6bda73172ab63f3ca4404f14a8dcca279"} Jan 31 06:50:47 crc kubenswrapper[4687]: I0131 06:50:47.790797 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:47 crc kubenswrapper[4687]: I0131 06:50:47.801759 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" Jan 31 06:50:47 crc kubenswrapper[4687]: I0131 06:50:47.819200 4687 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-authentication/oauth-openshift-d7587476c-9hkl4" podStartSLOduration=32.819174096 podStartE2EDuration="32.819174096s" podCreationTimestamp="2026-01-31 06:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:50:47.817320936 +0000 UTC m=+474.094580551" watchObservedRunningTime="2026-01-31 06:50:47.819174096 +0000 UTC m=+474.096433711" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.345353 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6tt8"] Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.346319 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w6tt8" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" containerName="registry-server" containerID="cri-o://444d9b4d1b1beef90576ca3065a041bc9b9e1b0fbb25f4e281e5b66971e98409" gracePeriod=30 Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.359522 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g6md9"] Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.360255 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g6md9" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" containerName="registry-server" containerID="cri-o://3033cd4b2a677fb6257cf8b258c1bad60b6c0ab566b0edfffcf094959e3697a0" gracePeriod=30 Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.366675 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c27wp"] Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.366925 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" podUID="175a043a-d6f7-4c39-953b-560986f36646" 
containerName="marketplace-operator" containerID="cri-o://209ea933cb20ba08ade450c7c3dc23bbfe777903ce8323c2209d4c454b37809d" gracePeriod=30 Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.375248 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kpmd6"] Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.375559 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kpmd6" podUID="fe701715-9a81-4ba7-be4b-f52834728547" containerName="registry-server" containerID="cri-o://f4a4c9c7a93540bb34a64d9a1c26bbfd6af95cd1ba0a331a9287013e42597245" gracePeriod=30 Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.379651 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7f5g"] Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.379897 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q7f5g" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" containerName="registry-server" containerID="cri-o://01c4a5d3351262379b031001ded0849a5bea4d726bda12859102be3f6a0583e2" gracePeriod=30 Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.386797 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ff2sf"] Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.387499 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ff2sf" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.400322 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ff2sf"] Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.468314 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d11e6dc8-1dc0-442d-951a-b3c6613f938f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ff2sf\" (UID: \"d11e6dc8-1dc0-442d-951a-b3c6613f938f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ff2sf" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.468365 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d11e6dc8-1dc0-442d-951a-b3c6613f938f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ff2sf\" (UID: \"d11e6dc8-1dc0-442d-951a-b3c6613f938f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ff2sf" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.468427 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dkwf\" (UniqueName: \"kubernetes.io/projected/d11e6dc8-1dc0-442d-951a-b3c6613f938f-kube-api-access-7dkwf\") pod \"marketplace-operator-79b997595-ff2sf\" (UID: \"d11e6dc8-1dc0-442d-951a-b3c6613f938f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ff2sf" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.569085 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d11e6dc8-1dc0-442d-951a-b3c6613f938f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ff2sf\" (UID: 
\"d11e6dc8-1dc0-442d-951a-b3c6613f938f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ff2sf" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.569124 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d11e6dc8-1dc0-442d-951a-b3c6613f938f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ff2sf\" (UID: \"d11e6dc8-1dc0-442d-951a-b3c6613f938f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ff2sf" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.569158 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dkwf\" (UniqueName: \"kubernetes.io/projected/d11e6dc8-1dc0-442d-951a-b3c6613f938f-kube-api-access-7dkwf\") pod \"marketplace-operator-79b997595-ff2sf\" (UID: \"d11e6dc8-1dc0-442d-951a-b3c6613f938f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ff2sf" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.571302 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d11e6dc8-1dc0-442d-951a-b3c6613f938f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ff2sf\" (UID: \"d11e6dc8-1dc0-442d-951a-b3c6613f938f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ff2sf" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.576878 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d11e6dc8-1dc0-442d-951a-b3c6613f938f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ff2sf\" (UID: \"d11e6dc8-1dc0-442d-951a-b3c6613f938f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ff2sf" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.588460 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7dkwf\" (UniqueName: \"kubernetes.io/projected/d11e6dc8-1dc0-442d-951a-b3c6613f938f-kube-api-access-7dkwf\") pod \"marketplace-operator-79b997595-ff2sf\" (UID: \"d11e6dc8-1dc0-442d-951a-b3c6613f938f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ff2sf" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.707143 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ff2sf" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.819604 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-c27wp_175a043a-d6f7-4c39-953b-560986f36646/marketplace-operator/1.log" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.819893 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.873093 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wqtq\" (UniqueName: \"kubernetes.io/projected/175a043a-d6f7-4c39-953b-560986f36646-kube-api-access-5wqtq\") pod \"175a043a-d6f7-4c39-953b-560986f36646\" (UID: \"175a043a-d6f7-4c39-953b-560986f36646\") " Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.873214 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/175a043a-d6f7-4c39-953b-560986f36646-marketplace-operator-metrics\") pod \"175a043a-d6f7-4c39-953b-560986f36646\" (UID: \"175a043a-d6f7-4c39-953b-560986f36646\") " Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.873434 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/175a043a-d6f7-4c39-953b-560986f36646-marketplace-trusted-ca\") pod 
\"175a043a-d6f7-4c39-953b-560986f36646\" (UID: \"175a043a-d6f7-4c39-953b-560986f36646\") " Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.874769 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/175a043a-d6f7-4c39-953b-560986f36646-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "175a043a-d6f7-4c39-953b-560986f36646" (UID: "175a043a-d6f7-4c39-953b-560986f36646"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.881256 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/175a043a-d6f7-4c39-953b-560986f36646-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "175a043a-d6f7-4c39-953b-560986f36646" (UID: "175a043a-d6f7-4c39-953b-560986f36646"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.885849 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175a043a-d6f7-4c39-953b-560986f36646-kube-api-access-5wqtq" (OuterVolumeSpecName: "kube-api-access-5wqtq") pod "175a043a-d6f7-4c39-953b-560986f36646" (UID: "175a043a-d6f7-4c39-953b-560986f36646"). InnerVolumeSpecName "kube-api-access-5wqtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.915042 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.919080 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.945447 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.954514 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.975029 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-utilities\") pod \"2a8064f7-2493-4fd0-a460-9d98ebdd1a24\" (UID: \"2a8064f7-2493-4fd0-a460-9d98ebdd1a24\") " Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.975196 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-utilities\") pod \"12638a02-8cb5-4367-a17a-fc50a1d9ddfb\" (UID: \"12638a02-8cb5-4367-a17a-fc50a1d9ddfb\") " Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.975258 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzthl\" (UniqueName: \"kubernetes.io/projected/fe701715-9a81-4ba7-be4b-f52834728547-kube-api-access-pzthl\") pod \"fe701715-9a81-4ba7-be4b-f52834728547\" (UID: \"fe701715-9a81-4ba7-be4b-f52834728547\") " Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.975286 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4dc04b-0379-4855-8b63-4ef29d0d6647-utilities\") pod \"3b4dc04b-0379-4855-8b63-4ef29d0d6647\" (UID: \"3b4dc04b-0379-4855-8b63-4ef29d0d6647\") " Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.975314 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4dc04b-0379-4855-8b63-4ef29d0d6647-catalog-content\") pod \"3b4dc04b-0379-4855-8b63-4ef29d0d6647\" (UID: \"3b4dc04b-0379-4855-8b63-4ef29d0d6647\") " Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.975340 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pcn4\" (UniqueName: \"kubernetes.io/projected/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-kube-api-access-2pcn4\") pod \"2a8064f7-2493-4fd0-a460-9d98ebdd1a24\" (UID: \"2a8064f7-2493-4fd0-a460-9d98ebdd1a24\") " Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.975360 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe701715-9a81-4ba7-be4b-f52834728547-catalog-content\") pod \"fe701715-9a81-4ba7-be4b-f52834728547\" (UID: \"fe701715-9a81-4ba7-be4b-f52834728547\") " Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.975385 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-catalog-content\") pod \"12638a02-8cb5-4367-a17a-fc50a1d9ddfb\" (UID: \"12638a02-8cb5-4367-a17a-fc50a1d9ddfb\") " Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.975426 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-catalog-content\") pod \"2a8064f7-2493-4fd0-a460-9d98ebdd1a24\" (UID: \"2a8064f7-2493-4fd0-a460-9d98ebdd1a24\") " Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.975458 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kb7j\" (UniqueName: \"kubernetes.io/projected/3b4dc04b-0379-4855-8b63-4ef29d0d6647-kube-api-access-7kb7j\") pod \"3b4dc04b-0379-4855-8b63-4ef29d0d6647\" (UID: 
\"3b4dc04b-0379-4855-8b63-4ef29d0d6647\") " Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.975481 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe701715-9a81-4ba7-be4b-f52834728547-utilities\") pod \"fe701715-9a81-4ba7-be4b-f52834728547\" (UID: \"fe701715-9a81-4ba7-be4b-f52834728547\") " Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.975532 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs85t\" (UniqueName: \"kubernetes.io/projected/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-kube-api-access-vs85t\") pod \"12638a02-8cb5-4367-a17a-fc50a1d9ddfb\" (UID: \"12638a02-8cb5-4367-a17a-fc50a1d9ddfb\") " Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.975669 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-utilities" (OuterVolumeSpecName: "utilities") pod "2a8064f7-2493-4fd0-a460-9d98ebdd1a24" (UID: "2a8064f7-2493-4fd0-a460-9d98ebdd1a24"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.975967 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wqtq\" (UniqueName: \"kubernetes.io/projected/175a043a-d6f7-4c39-953b-560986f36646-kube-api-access-5wqtq\") on node \"crc\" DevicePath \"\"" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.975988 4687 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/175a043a-d6f7-4c39-953b-560986f36646-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.976001 4687 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/175a043a-d6f7-4c39-953b-560986f36646-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.976013 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.976099 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b4dc04b-0379-4855-8b63-4ef29d0d6647-utilities" (OuterVolumeSpecName: "utilities") pod "3b4dc04b-0379-4855-8b63-4ef29d0d6647" (UID: "3b4dc04b-0379-4855-8b63-4ef29d0d6647"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.977026 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-utilities" (OuterVolumeSpecName: "utilities") pod "12638a02-8cb5-4367-a17a-fc50a1d9ddfb" (UID: "12638a02-8cb5-4367-a17a-fc50a1d9ddfb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.977803 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe701715-9a81-4ba7-be4b-f52834728547-utilities" (OuterVolumeSpecName: "utilities") pod "fe701715-9a81-4ba7-be4b-f52834728547" (UID: "fe701715-9a81-4ba7-be4b-f52834728547"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.982087 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe701715-9a81-4ba7-be4b-f52834728547-kube-api-access-pzthl" (OuterVolumeSpecName: "kube-api-access-pzthl") pod "fe701715-9a81-4ba7-be4b-f52834728547" (UID: "fe701715-9a81-4ba7-be4b-f52834728547"). InnerVolumeSpecName "kube-api-access-pzthl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.982593 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4dc04b-0379-4855-8b63-4ef29d0d6647-kube-api-access-7kb7j" (OuterVolumeSpecName: "kube-api-access-7kb7j") pod "3b4dc04b-0379-4855-8b63-4ef29d0d6647" (UID: "3b4dc04b-0379-4855-8b63-4ef29d0d6647"). InnerVolumeSpecName "kube-api-access-7kb7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.986879 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-kube-api-access-vs85t" (OuterVolumeSpecName: "kube-api-access-vs85t") pod "12638a02-8cb5-4367-a17a-fc50a1d9ddfb" (UID: "12638a02-8cb5-4367-a17a-fc50a1d9ddfb"). InnerVolumeSpecName "kube-api-access-vs85t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.986935 4687 generic.go:334] "Generic (PLEG): container finished" podID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" containerID="3033cd4b2a677fb6257cf8b258c1bad60b6c0ab566b0edfffcf094959e3697a0" exitCode=0 Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.987003 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6md9" event={"ID":"12638a02-8cb5-4367-a17a-fc50a1d9ddfb","Type":"ContainerDied","Data":"3033cd4b2a677fb6257cf8b258c1bad60b6c0ab566b0edfffcf094959e3697a0"} Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.987022 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g6md9" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.987033 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6md9" event={"ID":"12638a02-8cb5-4367-a17a-fc50a1d9ddfb","Type":"ContainerDied","Data":"277e14c048081285d84cb6f2fd0a83fcf9686efa8b05b16d7b7d90663d347f7f"} Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.987052 4687 scope.go:117] "RemoveContainer" containerID="3033cd4b2a677fb6257cf8b258c1bad60b6c0ab566b0edfffcf094959e3697a0" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.989065 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-c27wp_175a043a-d6f7-4c39-953b-560986f36646/marketplace-operator/1.log" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.989102 4687 generic.go:334] "Generic (PLEG): container finished" podID="175a043a-d6f7-4c39-953b-560986f36646" containerID="209ea933cb20ba08ade450c7c3dc23bbfe777903ce8323c2209d4c454b37809d" exitCode=0 Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.989167 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" event={"ID":"175a043a-d6f7-4c39-953b-560986f36646","Type":"ContainerDied","Data":"209ea933cb20ba08ade450c7c3dc23bbfe777903ce8323c2209d4c454b37809d"} Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.989211 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" event={"ID":"175a043a-d6f7-4c39-953b-560986f36646","Type":"ContainerDied","Data":"426e81ebf509272df08f08fd3e88299429a51833f2177ddfbed9160cee4eca3e"} Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.989303 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c27wp" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.991468 4687 generic.go:334] "Generic (PLEG): container finished" podID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" containerID="01c4a5d3351262379b031001ded0849a5bea4d726bda12859102be3f6a0583e2" exitCode=0 Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.991525 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7f5g" event={"ID":"3b4dc04b-0379-4855-8b63-4ef29d0d6647","Type":"ContainerDied","Data":"01c4a5d3351262379b031001ded0849a5bea4d726bda12859102be3f6a0583e2"} Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.991547 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7f5g" event={"ID":"3b4dc04b-0379-4855-8b63-4ef29d0d6647","Type":"ContainerDied","Data":"f4eb7e3048a747dca1d56184f43180e8ecec6eb3e5c7989594c986251c745e91"} Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.991616 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q7f5g" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.993343 4687 generic.go:334] "Generic (PLEG): container finished" podID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" containerID="444d9b4d1b1beef90576ca3065a041bc9b9e1b0fbb25f4e281e5b66971e98409" exitCode=0 Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.993385 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6tt8" event={"ID":"2a8064f7-2493-4fd0-a460-9d98ebdd1a24","Type":"ContainerDied","Data":"444d9b4d1b1beef90576ca3065a041bc9b9e1b0fbb25f4e281e5b66971e98409"} Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.993427 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6tt8" event={"ID":"2a8064f7-2493-4fd0-a460-9d98ebdd1a24","Type":"ContainerDied","Data":"eac937ee3a418174cb5dfdf245797bcd483c4f9d36220269586ba95c6bbffad9"} Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.993490 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6tt8" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.994873 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-kube-api-access-2pcn4" (OuterVolumeSpecName: "kube-api-access-2pcn4") pod "2a8064f7-2493-4fd0-a460-9d98ebdd1a24" (UID: "2a8064f7-2493-4fd0-a460-9d98ebdd1a24"). InnerVolumeSpecName "kube-api-access-2pcn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.995334 4687 generic.go:334] "Generic (PLEG): container finished" podID="fe701715-9a81-4ba7-be4b-f52834728547" containerID="f4a4c9c7a93540bb34a64d9a1c26bbfd6af95cd1ba0a331a9287013e42597245" exitCode=0 Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.995362 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpmd6" event={"ID":"fe701715-9a81-4ba7-be4b-f52834728547","Type":"ContainerDied","Data":"f4a4c9c7a93540bb34a64d9a1c26bbfd6af95cd1ba0a331a9287013e42597245"} Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.995378 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpmd6" event={"ID":"fe701715-9a81-4ba7-be4b-f52834728547","Type":"ContainerDied","Data":"f067e37ed712378fd5421bd8c46994c76110f9524c3f3c2e6d2bc37088c3a0ea"} Jan 31 06:51:21 crc kubenswrapper[4687]: I0131 06:51:21.995725 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kpmd6" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.013484 4687 scope.go:117] "RemoveContainer" containerID="1fe7d45d7776598cb69b0058dd2cfa6273068f12519ae1f848c8035ea8292833" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.033076 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c27wp"] Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.036582 4687 scope.go:117] "RemoveContainer" containerID="b180d793b305e1696845acc3a2a8155b7fe53d6a6080d0b6f2070bf5b8a09dfe" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.038306 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe701715-9a81-4ba7-be4b-f52834728547-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe701715-9a81-4ba7-be4b-f52834728547" (UID: "fe701715-9a81-4ba7-be4b-f52834728547"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.040021 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c27wp"] Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.050532 4687 scope.go:117] "RemoveContainer" containerID="3033cd4b2a677fb6257cf8b258c1bad60b6c0ab566b0edfffcf094959e3697a0" Jan 31 06:51:22 crc kubenswrapper[4687]: E0131 06:51:22.050921 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3033cd4b2a677fb6257cf8b258c1bad60b6c0ab566b0edfffcf094959e3697a0\": container with ID starting with 3033cd4b2a677fb6257cf8b258c1bad60b6c0ab566b0edfffcf094959e3697a0 not found: ID does not exist" containerID="3033cd4b2a677fb6257cf8b258c1bad60b6c0ab566b0edfffcf094959e3697a0" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.050962 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3033cd4b2a677fb6257cf8b258c1bad60b6c0ab566b0edfffcf094959e3697a0"} err="failed to get container status \"3033cd4b2a677fb6257cf8b258c1bad60b6c0ab566b0edfffcf094959e3697a0\": rpc error: code = NotFound desc = could not find container \"3033cd4b2a677fb6257cf8b258c1bad60b6c0ab566b0edfffcf094959e3697a0\": container with ID starting with 3033cd4b2a677fb6257cf8b258c1bad60b6c0ab566b0edfffcf094959e3697a0 not found: ID does not exist" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.050994 4687 scope.go:117] "RemoveContainer" containerID="1fe7d45d7776598cb69b0058dd2cfa6273068f12519ae1f848c8035ea8292833" Jan 31 06:51:22 crc kubenswrapper[4687]: E0131 06:51:22.052694 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fe7d45d7776598cb69b0058dd2cfa6273068f12519ae1f848c8035ea8292833\": container with ID starting with 
1fe7d45d7776598cb69b0058dd2cfa6273068f12519ae1f848c8035ea8292833 not found: ID does not exist" containerID="1fe7d45d7776598cb69b0058dd2cfa6273068f12519ae1f848c8035ea8292833" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.052747 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe7d45d7776598cb69b0058dd2cfa6273068f12519ae1f848c8035ea8292833"} err="failed to get container status \"1fe7d45d7776598cb69b0058dd2cfa6273068f12519ae1f848c8035ea8292833\": rpc error: code = NotFound desc = could not find container \"1fe7d45d7776598cb69b0058dd2cfa6273068f12519ae1f848c8035ea8292833\": container with ID starting with 1fe7d45d7776598cb69b0058dd2cfa6273068f12519ae1f848c8035ea8292833 not found: ID does not exist" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.052784 4687 scope.go:117] "RemoveContainer" containerID="b180d793b305e1696845acc3a2a8155b7fe53d6a6080d0b6f2070bf5b8a09dfe" Jan 31 06:51:22 crc kubenswrapper[4687]: E0131 06:51:22.053131 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b180d793b305e1696845acc3a2a8155b7fe53d6a6080d0b6f2070bf5b8a09dfe\": container with ID starting with b180d793b305e1696845acc3a2a8155b7fe53d6a6080d0b6f2070bf5b8a09dfe not found: ID does not exist" containerID="b180d793b305e1696845acc3a2a8155b7fe53d6a6080d0b6f2070bf5b8a09dfe" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.053160 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b180d793b305e1696845acc3a2a8155b7fe53d6a6080d0b6f2070bf5b8a09dfe"} err="failed to get container status \"b180d793b305e1696845acc3a2a8155b7fe53d6a6080d0b6f2070bf5b8a09dfe\": rpc error: code = NotFound desc = could not find container \"b180d793b305e1696845acc3a2a8155b7fe53d6a6080d0b6f2070bf5b8a09dfe\": container with ID starting with b180d793b305e1696845acc3a2a8155b7fe53d6a6080d0b6f2070bf5b8a09dfe not found: ID does not 
exist" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.053182 4687 scope.go:117] "RemoveContainer" containerID="209ea933cb20ba08ade450c7c3dc23bbfe777903ce8323c2209d4c454b37809d" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.069221 4687 scope.go:117] "RemoveContainer" containerID="2f767996d1145a87fdf9c2618403a2575ba71f616d9df92b64a695977acdd156" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.073568 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a8064f7-2493-4fd0-a460-9d98ebdd1a24" (UID: "2a8064f7-2493-4fd0-a460-9d98ebdd1a24"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.076585 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs85t\" (UniqueName: \"kubernetes.io/projected/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-kube-api-access-vs85t\") on node \"crc\" DevicePath \"\"" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.076614 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.076628 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzthl\" (UniqueName: \"kubernetes.io/projected/fe701715-9a81-4ba7-be4b-f52834728547-kube-api-access-pzthl\") on node \"crc\" DevicePath \"\"" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.076639 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4dc04b-0379-4855-8b63-4ef29d0d6647-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.076649 4687 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-2pcn4\" (UniqueName: \"kubernetes.io/projected/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-kube-api-access-2pcn4\") on node \"crc\" DevicePath \"\"" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.076660 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe701715-9a81-4ba7-be4b-f52834728547-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.076705 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a8064f7-2493-4fd0-a460-9d98ebdd1a24-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.076718 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kb7j\" (UniqueName: \"kubernetes.io/projected/3b4dc04b-0379-4855-8b63-4ef29d0d6647-kube-api-access-7kb7j\") on node \"crc\" DevicePath \"\"" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.076729 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe701715-9a81-4ba7-be4b-f52834728547-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.078830 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12638a02-8cb5-4367-a17a-fc50a1d9ddfb" (UID: "12638a02-8cb5-4367-a17a-fc50a1d9ddfb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.085056 4687 scope.go:117] "RemoveContainer" containerID="209ea933cb20ba08ade450c7c3dc23bbfe777903ce8323c2209d4c454b37809d" Jan 31 06:51:22 crc kubenswrapper[4687]: E0131 06:51:22.085591 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"209ea933cb20ba08ade450c7c3dc23bbfe777903ce8323c2209d4c454b37809d\": container with ID starting with 209ea933cb20ba08ade450c7c3dc23bbfe777903ce8323c2209d4c454b37809d not found: ID does not exist" containerID="209ea933cb20ba08ade450c7c3dc23bbfe777903ce8323c2209d4c454b37809d" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.085631 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209ea933cb20ba08ade450c7c3dc23bbfe777903ce8323c2209d4c454b37809d"} err="failed to get container status \"209ea933cb20ba08ade450c7c3dc23bbfe777903ce8323c2209d4c454b37809d\": rpc error: code = NotFound desc = could not find container \"209ea933cb20ba08ade450c7c3dc23bbfe777903ce8323c2209d4c454b37809d\": container with ID starting with 209ea933cb20ba08ade450c7c3dc23bbfe777903ce8323c2209d4c454b37809d not found: ID does not exist" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.085718 4687 scope.go:117] "RemoveContainer" containerID="2f767996d1145a87fdf9c2618403a2575ba71f616d9df92b64a695977acdd156" Jan 31 06:51:22 crc kubenswrapper[4687]: E0131 06:51:22.086018 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f767996d1145a87fdf9c2618403a2575ba71f616d9df92b64a695977acdd156\": container with ID starting with 2f767996d1145a87fdf9c2618403a2575ba71f616d9df92b64a695977acdd156 not found: ID does not exist" containerID="2f767996d1145a87fdf9c2618403a2575ba71f616d9df92b64a695977acdd156" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.086043 
4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f767996d1145a87fdf9c2618403a2575ba71f616d9df92b64a695977acdd156"} err="failed to get container status \"2f767996d1145a87fdf9c2618403a2575ba71f616d9df92b64a695977acdd156\": rpc error: code = NotFound desc = could not find container \"2f767996d1145a87fdf9c2618403a2575ba71f616d9df92b64a695977acdd156\": container with ID starting with 2f767996d1145a87fdf9c2618403a2575ba71f616d9df92b64a695977acdd156 not found: ID does not exist" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.086060 4687 scope.go:117] "RemoveContainer" containerID="01c4a5d3351262379b031001ded0849a5bea4d726bda12859102be3f6a0583e2" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.099807 4687 scope.go:117] "RemoveContainer" containerID="980572ecd2fad3fc43f43f0f7395cb076ee0e19c2646d8deda69d6c359ebee28" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.113871 4687 scope.go:117] "RemoveContainer" containerID="2a221d0b637be3284cc0931ec113911e071f3c104ef021a6c6b5e2bcced3b351" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.132725 4687 scope.go:117] "RemoveContainer" containerID="01c4a5d3351262379b031001ded0849a5bea4d726bda12859102be3f6a0583e2" Jan 31 06:51:22 crc kubenswrapper[4687]: E0131 06:51:22.133155 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01c4a5d3351262379b031001ded0849a5bea4d726bda12859102be3f6a0583e2\": container with ID starting with 01c4a5d3351262379b031001ded0849a5bea4d726bda12859102be3f6a0583e2 not found: ID does not exist" containerID="01c4a5d3351262379b031001ded0849a5bea4d726bda12859102be3f6a0583e2" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.133190 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01c4a5d3351262379b031001ded0849a5bea4d726bda12859102be3f6a0583e2"} err="failed to get container status 
\"01c4a5d3351262379b031001ded0849a5bea4d726bda12859102be3f6a0583e2\": rpc error: code = NotFound desc = could not find container \"01c4a5d3351262379b031001ded0849a5bea4d726bda12859102be3f6a0583e2\": container with ID starting with 01c4a5d3351262379b031001ded0849a5bea4d726bda12859102be3f6a0583e2 not found: ID does not exist" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.133216 4687 scope.go:117] "RemoveContainer" containerID="980572ecd2fad3fc43f43f0f7395cb076ee0e19c2646d8deda69d6c359ebee28" Jan 31 06:51:22 crc kubenswrapper[4687]: E0131 06:51:22.133466 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"980572ecd2fad3fc43f43f0f7395cb076ee0e19c2646d8deda69d6c359ebee28\": container with ID starting with 980572ecd2fad3fc43f43f0f7395cb076ee0e19c2646d8deda69d6c359ebee28 not found: ID does not exist" containerID="980572ecd2fad3fc43f43f0f7395cb076ee0e19c2646d8deda69d6c359ebee28" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.133495 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"980572ecd2fad3fc43f43f0f7395cb076ee0e19c2646d8deda69d6c359ebee28"} err="failed to get container status \"980572ecd2fad3fc43f43f0f7395cb076ee0e19c2646d8deda69d6c359ebee28\": rpc error: code = NotFound desc = could not find container \"980572ecd2fad3fc43f43f0f7395cb076ee0e19c2646d8deda69d6c359ebee28\": container with ID starting with 980572ecd2fad3fc43f43f0f7395cb076ee0e19c2646d8deda69d6c359ebee28 not found: ID does not exist" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.133510 4687 scope.go:117] "RemoveContainer" containerID="2a221d0b637be3284cc0931ec113911e071f3c104ef021a6c6b5e2bcced3b351" Jan 31 06:51:22 crc kubenswrapper[4687]: E0131 06:51:22.133793 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2a221d0b637be3284cc0931ec113911e071f3c104ef021a6c6b5e2bcced3b351\": container with ID starting with 2a221d0b637be3284cc0931ec113911e071f3c104ef021a6c6b5e2bcced3b351 not found: ID does not exist" containerID="2a221d0b637be3284cc0931ec113911e071f3c104ef021a6c6b5e2bcced3b351" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.133814 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a221d0b637be3284cc0931ec113911e071f3c104ef021a6c6b5e2bcced3b351"} err="failed to get container status \"2a221d0b637be3284cc0931ec113911e071f3c104ef021a6c6b5e2bcced3b351\": rpc error: code = NotFound desc = could not find container \"2a221d0b637be3284cc0931ec113911e071f3c104ef021a6c6b5e2bcced3b351\": container with ID starting with 2a221d0b637be3284cc0931ec113911e071f3c104ef021a6c6b5e2bcced3b351 not found: ID does not exist" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.133830 4687 scope.go:117] "RemoveContainer" containerID="444d9b4d1b1beef90576ca3065a041bc9b9e1b0fbb25f4e281e5b66971e98409" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.146227 4687 scope.go:117] "RemoveContainer" containerID="942c180d7e2b5cc1c25a6022719217b24af048cf2e5bff29c803aae25c76c47b" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.156970 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b4dc04b-0379-4855-8b63-4ef29d0d6647-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b4dc04b-0379-4855-8b63-4ef29d0d6647" (UID: "3b4dc04b-0379-4855-8b63-4ef29d0d6647"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.164705 4687 scope.go:117] "RemoveContainer" containerID="6a88c31149242f79c112b59ca761b6760fe4e37a2f8fed5cdc060b48c8a43c90" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.179111 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4dc04b-0379-4855-8b63-4ef29d0d6647-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.179142 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12638a02-8cb5-4367-a17a-fc50a1d9ddfb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.183234 4687 scope.go:117] "RemoveContainer" containerID="444d9b4d1b1beef90576ca3065a041bc9b9e1b0fbb25f4e281e5b66971e98409" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.183281 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ff2sf"] Jan 31 06:51:22 crc kubenswrapper[4687]: E0131 06:51:22.183726 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"444d9b4d1b1beef90576ca3065a041bc9b9e1b0fbb25f4e281e5b66971e98409\": container with ID starting with 444d9b4d1b1beef90576ca3065a041bc9b9e1b0fbb25f4e281e5b66971e98409 not found: ID does not exist" containerID="444d9b4d1b1beef90576ca3065a041bc9b9e1b0fbb25f4e281e5b66971e98409" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.183787 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444d9b4d1b1beef90576ca3065a041bc9b9e1b0fbb25f4e281e5b66971e98409"} err="failed to get container status \"444d9b4d1b1beef90576ca3065a041bc9b9e1b0fbb25f4e281e5b66971e98409\": rpc error: code = NotFound desc = could not find container 
\"444d9b4d1b1beef90576ca3065a041bc9b9e1b0fbb25f4e281e5b66971e98409\": container with ID starting with 444d9b4d1b1beef90576ca3065a041bc9b9e1b0fbb25f4e281e5b66971e98409 not found: ID does not exist" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.183820 4687 scope.go:117] "RemoveContainer" containerID="942c180d7e2b5cc1c25a6022719217b24af048cf2e5bff29c803aae25c76c47b" Jan 31 06:51:22 crc kubenswrapper[4687]: E0131 06:51:22.184154 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"942c180d7e2b5cc1c25a6022719217b24af048cf2e5bff29c803aae25c76c47b\": container with ID starting with 942c180d7e2b5cc1c25a6022719217b24af048cf2e5bff29c803aae25c76c47b not found: ID does not exist" containerID="942c180d7e2b5cc1c25a6022719217b24af048cf2e5bff29c803aae25c76c47b" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.184184 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"942c180d7e2b5cc1c25a6022719217b24af048cf2e5bff29c803aae25c76c47b"} err="failed to get container status \"942c180d7e2b5cc1c25a6022719217b24af048cf2e5bff29c803aae25c76c47b\": rpc error: code = NotFound desc = could not find container \"942c180d7e2b5cc1c25a6022719217b24af048cf2e5bff29c803aae25c76c47b\": container with ID starting with 942c180d7e2b5cc1c25a6022719217b24af048cf2e5bff29c803aae25c76c47b not found: ID does not exist" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.184211 4687 scope.go:117] "RemoveContainer" containerID="6a88c31149242f79c112b59ca761b6760fe4e37a2f8fed5cdc060b48c8a43c90" Jan 31 06:51:22 crc kubenswrapper[4687]: E0131 06:51:22.184458 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a88c31149242f79c112b59ca761b6760fe4e37a2f8fed5cdc060b48c8a43c90\": container with ID starting with 6a88c31149242f79c112b59ca761b6760fe4e37a2f8fed5cdc060b48c8a43c90 not found: ID does not exist" 
containerID="6a88c31149242f79c112b59ca761b6760fe4e37a2f8fed5cdc060b48c8a43c90" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.184481 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a88c31149242f79c112b59ca761b6760fe4e37a2f8fed5cdc060b48c8a43c90"} err="failed to get container status \"6a88c31149242f79c112b59ca761b6760fe4e37a2f8fed5cdc060b48c8a43c90\": rpc error: code = NotFound desc = could not find container \"6a88c31149242f79c112b59ca761b6760fe4e37a2f8fed5cdc060b48c8a43c90\": container with ID starting with 6a88c31149242f79c112b59ca761b6760fe4e37a2f8fed5cdc060b48c8a43c90 not found: ID does not exist" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.184496 4687 scope.go:117] "RemoveContainer" containerID="f4a4c9c7a93540bb34a64d9a1c26bbfd6af95cd1ba0a331a9287013e42597245" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.198582 4687 scope.go:117] "RemoveContainer" containerID="2fcdfd4d462828444fecc64e3671ccaa2781518f71074862af86d86b723f991b" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.215083 4687 scope.go:117] "RemoveContainer" containerID="27e93ea1fb8337dce0d9e61b9e404dba2e6980c4ce0de5d7b5e49f5234e5f622" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.229365 4687 scope.go:117] "RemoveContainer" containerID="f4a4c9c7a93540bb34a64d9a1c26bbfd6af95cd1ba0a331a9287013e42597245" Jan 31 06:51:22 crc kubenswrapper[4687]: E0131 06:51:22.229879 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4a4c9c7a93540bb34a64d9a1c26bbfd6af95cd1ba0a331a9287013e42597245\": container with ID starting with f4a4c9c7a93540bb34a64d9a1c26bbfd6af95cd1ba0a331a9287013e42597245 not found: ID does not exist" containerID="f4a4c9c7a93540bb34a64d9a1c26bbfd6af95cd1ba0a331a9287013e42597245" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.229927 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f4a4c9c7a93540bb34a64d9a1c26bbfd6af95cd1ba0a331a9287013e42597245"} err="failed to get container status \"f4a4c9c7a93540bb34a64d9a1c26bbfd6af95cd1ba0a331a9287013e42597245\": rpc error: code = NotFound desc = could not find container \"f4a4c9c7a93540bb34a64d9a1c26bbfd6af95cd1ba0a331a9287013e42597245\": container with ID starting with f4a4c9c7a93540bb34a64d9a1c26bbfd6af95cd1ba0a331a9287013e42597245 not found: ID does not exist" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.229986 4687 scope.go:117] "RemoveContainer" containerID="2fcdfd4d462828444fecc64e3671ccaa2781518f71074862af86d86b723f991b" Jan 31 06:51:22 crc kubenswrapper[4687]: E0131 06:51:22.230597 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fcdfd4d462828444fecc64e3671ccaa2781518f71074862af86d86b723f991b\": container with ID starting with 2fcdfd4d462828444fecc64e3671ccaa2781518f71074862af86d86b723f991b not found: ID does not exist" containerID="2fcdfd4d462828444fecc64e3671ccaa2781518f71074862af86d86b723f991b" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.230640 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fcdfd4d462828444fecc64e3671ccaa2781518f71074862af86d86b723f991b"} err="failed to get container status \"2fcdfd4d462828444fecc64e3671ccaa2781518f71074862af86d86b723f991b\": rpc error: code = NotFound desc = could not find container \"2fcdfd4d462828444fecc64e3671ccaa2781518f71074862af86d86b723f991b\": container with ID starting with 2fcdfd4d462828444fecc64e3671ccaa2781518f71074862af86d86b723f991b not found: ID does not exist" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.230669 4687 scope.go:117] "RemoveContainer" containerID="27e93ea1fb8337dce0d9e61b9e404dba2e6980c4ce0de5d7b5e49f5234e5f622" Jan 31 06:51:22 crc kubenswrapper[4687]: E0131 06:51:22.231130 4687 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"27e93ea1fb8337dce0d9e61b9e404dba2e6980c4ce0de5d7b5e49f5234e5f622\": container with ID starting with 27e93ea1fb8337dce0d9e61b9e404dba2e6980c4ce0de5d7b5e49f5234e5f622 not found: ID does not exist" containerID="27e93ea1fb8337dce0d9e61b9e404dba2e6980c4ce0de5d7b5e49f5234e5f622" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.231173 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e93ea1fb8337dce0d9e61b9e404dba2e6980c4ce0de5d7b5e49f5234e5f622"} err="failed to get container status \"27e93ea1fb8337dce0d9e61b9e404dba2e6980c4ce0de5d7b5e49f5234e5f622\": rpc error: code = NotFound desc = could not find container \"27e93ea1fb8337dce0d9e61b9e404dba2e6980c4ce0de5d7b5e49f5234e5f622\": container with ID starting with 27e93ea1fb8337dce0d9e61b9e404dba2e6980c4ce0de5d7b5e49f5234e5f622 not found: ID does not exist" Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.317621 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g6md9"] Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.320245 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g6md9"] Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.328499 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7f5g"] Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.331653 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q7f5g"] Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.339594 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6tt8"] Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.344002 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w6tt8"] Jan 31 06:51:22 crc 
kubenswrapper[4687]: I0131 06:51:22.353103 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kpmd6"] Jan 31 06:51:22 crc kubenswrapper[4687]: I0131 06:51:22.356036 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kpmd6"] Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.003282 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ff2sf" event={"ID":"d11e6dc8-1dc0-442d-951a-b3c6613f938f","Type":"ContainerStarted","Data":"6b9cf1d124a9847fe188451259924f72257a55b6f6dd4e795c51599154ae8de4"} Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.003641 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ff2sf" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.003657 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ff2sf" event={"ID":"d11e6dc8-1dc0-442d-951a-b3c6613f938f","Type":"ContainerStarted","Data":"beeab361f7cf482fdf470ba4715ab71f614c853b16de9ac2d79bdf473479274c"} Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.006801 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ff2sf" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.019535 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ff2sf" podStartSLOduration=2.019513785 podStartE2EDuration="2.019513785s" podCreationTimestamp="2026-01-31 06:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:51:23.017897672 +0000 UTC m=+509.295157247" watchObservedRunningTime="2026-01-31 06:51:23.019513785 +0000 UTC m=+509.296773360" Jan 31 06:51:23 
crc kubenswrapper[4687]: I0131 06:51:23.558324 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7cnml"] Jan 31 06:51:23 crc kubenswrapper[4687]: E0131 06:51:23.558588 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe701715-9a81-4ba7-be4b-f52834728547" containerName="registry-server" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.558605 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe701715-9a81-4ba7-be4b-f52834728547" containerName="registry-server" Jan 31 06:51:23 crc kubenswrapper[4687]: E0131 06:51:23.558625 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" containerName="extract-content" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.558633 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" containerName="extract-content" Jan 31 06:51:23 crc kubenswrapper[4687]: E0131 06:51:23.558645 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" containerName="registry-server" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.558652 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" containerName="registry-server" Jan 31 06:51:23 crc kubenswrapper[4687]: E0131 06:51:23.558665 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" containerName="extract-utilities" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.558671 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" containerName="extract-utilities" Jan 31 06:51:23 crc kubenswrapper[4687]: E0131 06:51:23.558682 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe701715-9a81-4ba7-be4b-f52834728547" containerName="extract-content" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 
06:51:23.558690 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe701715-9a81-4ba7-be4b-f52834728547" containerName="extract-content" Jan 31 06:51:23 crc kubenswrapper[4687]: E0131 06:51:23.558699 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175a043a-d6f7-4c39-953b-560986f36646" containerName="marketplace-operator" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.558706 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="175a043a-d6f7-4c39-953b-560986f36646" containerName="marketplace-operator" Jan 31 06:51:23 crc kubenswrapper[4687]: E0131 06:51:23.558719 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe701715-9a81-4ba7-be4b-f52834728547" containerName="extract-utilities" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.558726 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe701715-9a81-4ba7-be4b-f52834728547" containerName="extract-utilities" Jan 31 06:51:23 crc kubenswrapper[4687]: E0131 06:51:23.558739 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" containerName="extract-content" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.558747 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" containerName="extract-content" Jan 31 06:51:23 crc kubenswrapper[4687]: E0131 06:51:23.558758 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175a043a-d6f7-4c39-953b-560986f36646" containerName="marketplace-operator" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.558766 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="175a043a-d6f7-4c39-953b-560986f36646" containerName="marketplace-operator" Jan 31 06:51:23 crc kubenswrapper[4687]: E0131 06:51:23.558775 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" containerName="extract-utilities" Jan 31 06:51:23 crc 
kubenswrapper[4687]: I0131 06:51:23.558783 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" containerName="extract-utilities" Jan 31 06:51:23 crc kubenswrapper[4687]: E0131 06:51:23.558792 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" containerName="extract-utilities" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.558800 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" containerName="extract-utilities" Jan 31 06:51:23 crc kubenswrapper[4687]: E0131 06:51:23.558811 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" containerName="registry-server" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.558819 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" containerName="registry-server" Jan 31 06:51:23 crc kubenswrapper[4687]: E0131 06:51:23.558829 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175a043a-d6f7-4c39-953b-560986f36646" containerName="marketplace-operator" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.558836 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="175a043a-d6f7-4c39-953b-560986f36646" containerName="marketplace-operator" Jan 31 06:51:23 crc kubenswrapper[4687]: E0131 06:51:23.558848 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" containerName="extract-content" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.558855 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" containerName="extract-content" Jan 31 06:51:23 crc kubenswrapper[4687]: E0131 06:51:23.558865 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" containerName="registry-server" Jan 31 06:51:23 
crc kubenswrapper[4687]: I0131 06:51:23.558872 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" containerName="registry-server" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.558969 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" containerName="registry-server" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.558984 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe701715-9a81-4ba7-be4b-f52834728547" containerName="registry-server" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.558995 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="175a043a-d6f7-4c39-953b-560986f36646" containerName="marketplace-operator" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.559005 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="175a043a-d6f7-4c39-953b-560986f36646" containerName="marketplace-operator" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.559014 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="175a043a-d6f7-4c39-953b-560986f36646" containerName="marketplace-operator" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.559027 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" containerName="registry-server" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.559036 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" containerName="registry-server" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.559893 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cnml" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.566670 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cnml"] Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.567405 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.609532 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12638a02-8cb5-4367-a17a-fc50a1d9ddfb" path="/var/lib/kubelet/pods/12638a02-8cb5-4367-a17a-fc50a1d9ddfb/volumes" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.610678 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175a043a-d6f7-4c39-953b-560986f36646" path="/var/lib/kubelet/pods/175a043a-d6f7-4c39-953b-560986f36646/volumes" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.611322 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a8064f7-2493-4fd0-a460-9d98ebdd1a24" path="/var/lib/kubelet/pods/2a8064f7-2493-4fd0-a460-9d98ebdd1a24/volumes" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.612702 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b4dc04b-0379-4855-8b63-4ef29d0d6647" path="/var/lib/kubelet/pods/3b4dc04b-0379-4855-8b63-4ef29d0d6647/volumes" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.613470 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe701715-9a81-4ba7-be4b-f52834728547" path="/var/lib/kubelet/pods/fe701715-9a81-4ba7-be4b-f52834728547/volumes" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.696675 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ff7f6a-0a52-4206-9fe1-5177e900634b-utilities\") pod \"redhat-marketplace-7cnml\" (UID: 
\"48ff7f6a-0a52-4206-9fe1-5177e900634b\") " pod="openshift-marketplace/redhat-marketplace-7cnml" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.696727 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ff7f6a-0a52-4206-9fe1-5177e900634b-catalog-content\") pod \"redhat-marketplace-7cnml\" (UID: \"48ff7f6a-0a52-4206-9fe1-5177e900634b\") " pod="openshift-marketplace/redhat-marketplace-7cnml" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.696778 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k9kt\" (UniqueName: \"kubernetes.io/projected/48ff7f6a-0a52-4206-9fe1-5177e900634b-kube-api-access-5k9kt\") pod \"redhat-marketplace-7cnml\" (UID: \"48ff7f6a-0a52-4206-9fe1-5177e900634b\") " pod="openshift-marketplace/redhat-marketplace-7cnml" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.754571 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5vl6d"] Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.756460 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5vl6d" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.759956 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.774719 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5vl6d"] Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.797823 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klv54\" (UniqueName: \"kubernetes.io/projected/4cce49bf-11b5-4c33-b241-b829e91eb9a2-kube-api-access-klv54\") pod \"redhat-operators-5vl6d\" (UID: \"4cce49bf-11b5-4c33-b241-b829e91eb9a2\") " pod="openshift-marketplace/redhat-operators-5vl6d" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.797889 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cce49bf-11b5-4c33-b241-b829e91eb9a2-catalog-content\") pod \"redhat-operators-5vl6d\" (UID: \"4cce49bf-11b5-4c33-b241-b829e91eb9a2\") " pod="openshift-marketplace/redhat-operators-5vl6d" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.797967 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cce49bf-11b5-4c33-b241-b829e91eb9a2-utilities\") pod \"redhat-operators-5vl6d\" (UID: \"4cce49bf-11b5-4c33-b241-b829e91eb9a2\") " pod="openshift-marketplace/redhat-operators-5vl6d" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.798006 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ff7f6a-0a52-4206-9fe1-5177e900634b-utilities\") pod \"redhat-marketplace-7cnml\" (UID: \"48ff7f6a-0a52-4206-9fe1-5177e900634b\") " 
pod="openshift-marketplace/redhat-marketplace-7cnml" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.798031 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ff7f6a-0a52-4206-9fe1-5177e900634b-catalog-content\") pod \"redhat-marketplace-7cnml\" (UID: \"48ff7f6a-0a52-4206-9fe1-5177e900634b\") " pod="openshift-marketplace/redhat-marketplace-7cnml" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.798050 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k9kt\" (UniqueName: \"kubernetes.io/projected/48ff7f6a-0a52-4206-9fe1-5177e900634b-kube-api-access-5k9kt\") pod \"redhat-marketplace-7cnml\" (UID: \"48ff7f6a-0a52-4206-9fe1-5177e900634b\") " pod="openshift-marketplace/redhat-marketplace-7cnml" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.798597 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ff7f6a-0a52-4206-9fe1-5177e900634b-utilities\") pod \"redhat-marketplace-7cnml\" (UID: \"48ff7f6a-0a52-4206-9fe1-5177e900634b\") " pod="openshift-marketplace/redhat-marketplace-7cnml" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.798715 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ff7f6a-0a52-4206-9fe1-5177e900634b-catalog-content\") pod \"redhat-marketplace-7cnml\" (UID: \"48ff7f6a-0a52-4206-9fe1-5177e900634b\") " pod="openshift-marketplace/redhat-marketplace-7cnml" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.815378 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k9kt\" (UniqueName: \"kubernetes.io/projected/48ff7f6a-0a52-4206-9fe1-5177e900634b-kube-api-access-5k9kt\") pod \"redhat-marketplace-7cnml\" (UID: \"48ff7f6a-0a52-4206-9fe1-5177e900634b\") " 
pod="openshift-marketplace/redhat-marketplace-7cnml" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.879967 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cnml" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.898611 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klv54\" (UniqueName: \"kubernetes.io/projected/4cce49bf-11b5-4c33-b241-b829e91eb9a2-kube-api-access-klv54\") pod \"redhat-operators-5vl6d\" (UID: \"4cce49bf-11b5-4c33-b241-b829e91eb9a2\") " pod="openshift-marketplace/redhat-operators-5vl6d" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.898658 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cce49bf-11b5-4c33-b241-b829e91eb9a2-catalog-content\") pod \"redhat-operators-5vl6d\" (UID: \"4cce49bf-11b5-4c33-b241-b829e91eb9a2\") " pod="openshift-marketplace/redhat-operators-5vl6d" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.898703 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cce49bf-11b5-4c33-b241-b829e91eb9a2-utilities\") pod \"redhat-operators-5vl6d\" (UID: \"4cce49bf-11b5-4c33-b241-b829e91eb9a2\") " pod="openshift-marketplace/redhat-operators-5vl6d" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.899117 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cce49bf-11b5-4c33-b241-b829e91eb9a2-utilities\") pod \"redhat-operators-5vl6d\" (UID: \"4cce49bf-11b5-4c33-b241-b829e91eb9a2\") " pod="openshift-marketplace/redhat-operators-5vl6d" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.899426 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4cce49bf-11b5-4c33-b241-b829e91eb9a2-catalog-content\") pod \"redhat-operators-5vl6d\" (UID: \"4cce49bf-11b5-4c33-b241-b829e91eb9a2\") " pod="openshift-marketplace/redhat-operators-5vl6d" Jan 31 06:51:23 crc kubenswrapper[4687]: I0131 06:51:23.918537 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klv54\" (UniqueName: \"kubernetes.io/projected/4cce49bf-11b5-4c33-b241-b829e91eb9a2-kube-api-access-klv54\") pod \"redhat-operators-5vl6d\" (UID: \"4cce49bf-11b5-4c33-b241-b829e91eb9a2\") " pod="openshift-marketplace/redhat-operators-5vl6d" Jan 31 06:51:24 crc kubenswrapper[4687]: I0131 06:51:24.109736 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5vl6d" Jan 31 06:51:24 crc kubenswrapper[4687]: I0131 06:51:24.282067 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cnml"] Jan 31 06:51:24 crc kubenswrapper[4687]: W0131 06:51:24.288812 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48ff7f6a_0a52_4206_9fe1_5177e900634b.slice/crio-95dfed84e9166ca17ef1919d6b5f67e85057e31cc53a89dee76aa9d90839b2a5 WatchSource:0}: Error finding container 95dfed84e9166ca17ef1919d6b5f67e85057e31cc53a89dee76aa9d90839b2a5: Status 404 returned error can't find the container with id 95dfed84e9166ca17ef1919d6b5f67e85057e31cc53a89dee76aa9d90839b2a5 Jan 31 06:51:24 crc kubenswrapper[4687]: I0131 06:51:24.489809 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5vl6d"] Jan 31 06:51:24 crc kubenswrapper[4687]: W0131 06:51:24.516189 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce49bf_11b5_4c33_b241_b829e91eb9a2.slice/crio-5f79d3244a212b2c756eaa98c12a7301539ae896dd86dd72dc4f9d2bc24838d2 
WatchSource:0}: Error finding container 5f79d3244a212b2c756eaa98c12a7301539ae896dd86dd72dc4f9d2bc24838d2: Status 404 returned error can't find the container with id 5f79d3244a212b2c756eaa98c12a7301539ae896dd86dd72dc4f9d2bc24838d2 Jan 31 06:51:25 crc kubenswrapper[4687]: I0131 06:51:25.022205 4687 generic.go:334] "Generic (PLEG): container finished" podID="48ff7f6a-0a52-4206-9fe1-5177e900634b" containerID="623837d33d44ce464782a449ead40d01d0a88257d2a57b5795b22a4b33cc8cee" exitCode=0 Jan 31 06:51:25 crc kubenswrapper[4687]: I0131 06:51:25.022260 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cnml" event={"ID":"48ff7f6a-0a52-4206-9fe1-5177e900634b","Type":"ContainerDied","Data":"623837d33d44ce464782a449ead40d01d0a88257d2a57b5795b22a4b33cc8cee"} Jan 31 06:51:25 crc kubenswrapper[4687]: I0131 06:51:25.022319 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cnml" event={"ID":"48ff7f6a-0a52-4206-9fe1-5177e900634b","Type":"ContainerStarted","Data":"95dfed84e9166ca17ef1919d6b5f67e85057e31cc53a89dee76aa9d90839b2a5"} Jan 31 06:51:25 crc kubenswrapper[4687]: I0131 06:51:25.023891 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 06:51:25 crc kubenswrapper[4687]: I0131 06:51:25.024615 4687 generic.go:334] "Generic (PLEG): container finished" podID="4cce49bf-11b5-4c33-b241-b829e91eb9a2" containerID="47011d6fc07748e8830df9496dc490339508a5b3180174508af05bb253a65513" exitCode=0 Jan 31 06:51:25 crc kubenswrapper[4687]: I0131 06:51:25.024671 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vl6d" event={"ID":"4cce49bf-11b5-4c33-b241-b829e91eb9a2","Type":"ContainerDied","Data":"47011d6fc07748e8830df9496dc490339508a5b3180174508af05bb253a65513"} Jan 31 06:51:25 crc kubenswrapper[4687]: I0131 06:51:25.024708 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-5vl6d" event={"ID":"4cce49bf-11b5-4c33-b241-b829e91eb9a2","Type":"ContainerStarted","Data":"5f79d3244a212b2c756eaa98c12a7301539ae896dd86dd72dc4f9d2bc24838d2"} Jan 31 06:51:25 crc kubenswrapper[4687]: I0131 06:51:25.954349 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2zq5g"] Jan 31 06:51:25 crc kubenswrapper[4687]: I0131 06:51:25.956011 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2zq5g" Jan 31 06:51:25 crc kubenswrapper[4687]: I0131 06:51:25.957690 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 06:51:25 crc kubenswrapper[4687]: I0131 06:51:25.966630 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2zq5g"] Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.025708 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/824621bb-1ee0-4034-9dfc-d8bc3440757c-catalog-content\") pod \"community-operators-2zq5g\" (UID: \"824621bb-1ee0-4034-9dfc-d8bc3440757c\") " pod="openshift-marketplace/community-operators-2zq5g" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.025809 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/824621bb-1ee0-4034-9dfc-d8bc3440757c-utilities\") pod \"community-operators-2zq5g\" (UID: \"824621bb-1ee0-4034-9dfc-d8bc3440757c\") " pod="openshift-marketplace/community-operators-2zq5g" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.025840 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh2bm\" (UniqueName: 
\"kubernetes.io/projected/824621bb-1ee0-4034-9dfc-d8bc3440757c-kube-api-access-xh2bm\") pod \"community-operators-2zq5g\" (UID: \"824621bb-1ee0-4034-9dfc-d8bc3440757c\") " pod="openshift-marketplace/community-operators-2zq5g" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.033477 4687 generic.go:334] "Generic (PLEG): container finished" podID="48ff7f6a-0a52-4206-9fe1-5177e900634b" containerID="4052c76f4508ba86a23af09f85df66b757b0b85675a37b76fdf98029bc987640" exitCode=0 Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.033534 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cnml" event={"ID":"48ff7f6a-0a52-4206-9fe1-5177e900634b","Type":"ContainerDied","Data":"4052c76f4508ba86a23af09f85df66b757b0b85675a37b76fdf98029bc987640"} Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.036276 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vl6d" event={"ID":"4cce49bf-11b5-4c33-b241-b829e91eb9a2","Type":"ContainerStarted","Data":"6523df055d13118ac81230bc44c544cb8b3fcf0d472204c2b8bd33c67e8b8216"} Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.126739 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/824621bb-1ee0-4034-9dfc-d8bc3440757c-utilities\") pod \"community-operators-2zq5g\" (UID: \"824621bb-1ee0-4034-9dfc-d8bc3440757c\") " pod="openshift-marketplace/community-operators-2zq5g" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.126813 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh2bm\" (UniqueName: \"kubernetes.io/projected/824621bb-1ee0-4034-9dfc-d8bc3440757c-kube-api-access-xh2bm\") pod \"community-operators-2zq5g\" (UID: \"824621bb-1ee0-4034-9dfc-d8bc3440757c\") " pod="openshift-marketplace/community-operators-2zq5g" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.126852 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/824621bb-1ee0-4034-9dfc-d8bc3440757c-catalog-content\") pod \"community-operators-2zq5g\" (UID: \"824621bb-1ee0-4034-9dfc-d8bc3440757c\") " pod="openshift-marketplace/community-operators-2zq5g" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.127427 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/824621bb-1ee0-4034-9dfc-d8bc3440757c-catalog-content\") pod \"community-operators-2zq5g\" (UID: \"824621bb-1ee0-4034-9dfc-d8bc3440757c\") " pod="openshift-marketplace/community-operators-2zq5g" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.128825 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/824621bb-1ee0-4034-9dfc-d8bc3440757c-utilities\") pod \"community-operators-2zq5g\" (UID: \"824621bb-1ee0-4034-9dfc-d8bc3440757c\") " pod="openshift-marketplace/community-operators-2zq5g" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.149924 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh2bm\" (UniqueName: \"kubernetes.io/projected/824621bb-1ee0-4034-9dfc-d8bc3440757c-kube-api-access-xh2bm\") pod \"community-operators-2zq5g\" (UID: \"824621bb-1ee0-4034-9dfc-d8bc3440757c\") " pod="openshift-marketplace/community-operators-2zq5g" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.167830 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nwgjc"] Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.168901 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nwgjc" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.171572 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nwgjc"] Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.171911 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.228560 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rs6b\" (UniqueName: \"kubernetes.io/projected/944e21b2-ebb1-48c3-aaa8-f0264981f380-kube-api-access-6rs6b\") pod \"certified-operators-nwgjc\" (UID: \"944e21b2-ebb1-48c3-aaa8-f0264981f380\") " pod="openshift-marketplace/certified-operators-nwgjc" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.228604 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/944e21b2-ebb1-48c3-aaa8-f0264981f380-catalog-content\") pod \"certified-operators-nwgjc\" (UID: \"944e21b2-ebb1-48c3-aaa8-f0264981f380\") " pod="openshift-marketplace/certified-operators-nwgjc" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.228623 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/944e21b2-ebb1-48c3-aaa8-f0264981f380-utilities\") pod \"certified-operators-nwgjc\" (UID: \"944e21b2-ebb1-48c3-aaa8-f0264981f380\") " pod="openshift-marketplace/certified-operators-nwgjc" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.279885 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2zq5g" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.329813 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rs6b\" (UniqueName: \"kubernetes.io/projected/944e21b2-ebb1-48c3-aaa8-f0264981f380-kube-api-access-6rs6b\") pod \"certified-operators-nwgjc\" (UID: \"944e21b2-ebb1-48c3-aaa8-f0264981f380\") " pod="openshift-marketplace/certified-operators-nwgjc" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.330231 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/944e21b2-ebb1-48c3-aaa8-f0264981f380-catalog-content\") pod \"certified-operators-nwgjc\" (UID: \"944e21b2-ebb1-48c3-aaa8-f0264981f380\") " pod="openshift-marketplace/certified-operators-nwgjc" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.330265 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/944e21b2-ebb1-48c3-aaa8-f0264981f380-utilities\") pod \"certified-operators-nwgjc\" (UID: \"944e21b2-ebb1-48c3-aaa8-f0264981f380\") " pod="openshift-marketplace/certified-operators-nwgjc" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.331132 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/944e21b2-ebb1-48c3-aaa8-f0264981f380-catalog-content\") pod \"certified-operators-nwgjc\" (UID: \"944e21b2-ebb1-48c3-aaa8-f0264981f380\") " pod="openshift-marketplace/certified-operators-nwgjc" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.331181 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/944e21b2-ebb1-48c3-aaa8-f0264981f380-utilities\") pod \"certified-operators-nwgjc\" (UID: \"944e21b2-ebb1-48c3-aaa8-f0264981f380\") " 
pod="openshift-marketplace/certified-operators-nwgjc" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.349061 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rs6b\" (UniqueName: \"kubernetes.io/projected/944e21b2-ebb1-48c3-aaa8-f0264981f380-kube-api-access-6rs6b\") pod \"certified-operators-nwgjc\" (UID: \"944e21b2-ebb1-48c3-aaa8-f0264981f380\") " pod="openshift-marketplace/certified-operators-nwgjc" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.485386 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nwgjc" Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.715385 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2zq5g"] Jan 31 06:51:26 crc kubenswrapper[4687]: W0131 06:51:26.719901 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod824621bb_1ee0_4034_9dfc_d8bc3440757c.slice/crio-1e75b1a83482022ce435dc072ac09472aedb03238cd0134ca754fe8f50fd50ba WatchSource:0}: Error finding container 1e75b1a83482022ce435dc072ac09472aedb03238cd0134ca754fe8f50fd50ba: Status 404 returned error can't find the container with id 1e75b1a83482022ce435dc072ac09472aedb03238cd0134ca754fe8f50fd50ba Jan 31 06:51:26 crc kubenswrapper[4687]: I0131 06:51:26.904249 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nwgjc"] Jan 31 06:51:26 crc kubenswrapper[4687]: W0131 06:51:26.913168 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod944e21b2_ebb1_48c3_aaa8_f0264981f380.slice/crio-e44de3597f33c0b6afb4cee7fccde95d116647f850dfb894a2ec5c5bfc226932 WatchSource:0}: Error finding container e44de3597f33c0b6afb4cee7fccde95d116647f850dfb894a2ec5c5bfc226932: Status 404 returned error can't find the container 
with id e44de3597f33c0b6afb4cee7fccde95d116647f850dfb894a2ec5c5bfc226932 Jan 31 06:51:27 crc kubenswrapper[4687]: I0131 06:51:27.050694 4687 generic.go:334] "Generic (PLEG): container finished" podID="4cce49bf-11b5-4c33-b241-b829e91eb9a2" containerID="6523df055d13118ac81230bc44c544cb8b3fcf0d472204c2b8bd33c67e8b8216" exitCode=0 Jan 31 06:51:27 crc kubenswrapper[4687]: I0131 06:51:27.050765 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vl6d" event={"ID":"4cce49bf-11b5-4c33-b241-b829e91eb9a2","Type":"ContainerDied","Data":"6523df055d13118ac81230bc44c544cb8b3fcf0d472204c2b8bd33c67e8b8216"} Jan 31 06:51:27 crc kubenswrapper[4687]: I0131 06:51:27.054960 4687 generic.go:334] "Generic (PLEG): container finished" podID="824621bb-1ee0-4034-9dfc-d8bc3440757c" containerID="66f82dc64c1b394a01066180519eab0020997582a028018bdda98b11cf5feb93" exitCode=0 Jan 31 06:51:27 crc kubenswrapper[4687]: I0131 06:51:27.055032 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zq5g" event={"ID":"824621bb-1ee0-4034-9dfc-d8bc3440757c","Type":"ContainerDied","Data":"66f82dc64c1b394a01066180519eab0020997582a028018bdda98b11cf5feb93"} Jan 31 06:51:27 crc kubenswrapper[4687]: I0131 06:51:27.055056 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zq5g" event={"ID":"824621bb-1ee0-4034-9dfc-d8bc3440757c","Type":"ContainerStarted","Data":"1e75b1a83482022ce435dc072ac09472aedb03238cd0134ca754fe8f50fd50ba"} Jan 31 06:51:27 crc kubenswrapper[4687]: I0131 06:51:27.058033 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwgjc" event={"ID":"944e21b2-ebb1-48c3-aaa8-f0264981f380","Type":"ContainerStarted","Data":"19bffa06407f6b07959a1f124a2db0f868edfeecf3087e751fbfb1e135e4dc0b"} Jan 31 06:51:27 crc kubenswrapper[4687]: I0131 06:51:27.058055 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-nwgjc" event={"ID":"944e21b2-ebb1-48c3-aaa8-f0264981f380","Type":"ContainerStarted","Data":"e44de3597f33c0b6afb4cee7fccde95d116647f850dfb894a2ec5c5bfc226932"} Jan 31 06:51:27 crc kubenswrapper[4687]: I0131 06:51:27.062003 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cnml" event={"ID":"48ff7f6a-0a52-4206-9fe1-5177e900634b","Type":"ContainerStarted","Data":"55d3548c40a4ea487aca34227ca971b0c52750731101c14e24aaeedb5dbaf116"} Jan 31 06:51:27 crc kubenswrapper[4687]: I0131 06:51:27.111149 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7cnml" podStartSLOduration=2.648952321 podStartE2EDuration="4.111127039s" podCreationTimestamp="2026-01-31 06:51:23 +0000 UTC" firstStartedPulling="2026-01-31 06:51:25.023627981 +0000 UTC m=+511.300887556" lastFinishedPulling="2026-01-31 06:51:26.485802699 +0000 UTC m=+512.763062274" observedRunningTime="2026-01-31 06:51:27.106092202 +0000 UTC m=+513.383351857" watchObservedRunningTime="2026-01-31 06:51:27.111127039 +0000 UTC m=+513.388386634" Jan 31 06:51:28 crc kubenswrapper[4687]: I0131 06:51:28.070115 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vl6d" event={"ID":"4cce49bf-11b5-4c33-b241-b829e91eb9a2","Type":"ContainerStarted","Data":"ea6fb2d2636c365eec3d96ac27152e9b7a94a3fdf58c51627d8415b8d55a977e"} Jan 31 06:51:28 crc kubenswrapper[4687]: I0131 06:51:28.071639 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zq5g" event={"ID":"824621bb-1ee0-4034-9dfc-d8bc3440757c","Type":"ContainerStarted","Data":"40742de14011d80f7f7444af0f91857078f25008c7657b76fbec26f61dc13ef2"} Jan 31 06:51:28 crc kubenswrapper[4687]: I0131 06:51:28.073539 4687 generic.go:334] "Generic (PLEG): container finished" podID="944e21b2-ebb1-48c3-aaa8-f0264981f380" 
containerID="19bffa06407f6b07959a1f124a2db0f868edfeecf3087e751fbfb1e135e4dc0b" exitCode=0 Jan 31 06:51:28 crc kubenswrapper[4687]: I0131 06:51:28.074019 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwgjc" event={"ID":"944e21b2-ebb1-48c3-aaa8-f0264981f380","Type":"ContainerDied","Data":"19bffa06407f6b07959a1f124a2db0f868edfeecf3087e751fbfb1e135e4dc0b"} Jan 31 06:51:28 crc kubenswrapper[4687]: I0131 06:51:28.089016 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5vl6d" podStartSLOduration=2.669393785 podStartE2EDuration="5.088999025s" podCreationTimestamp="2026-01-31 06:51:23 +0000 UTC" firstStartedPulling="2026-01-31 06:51:25.025950104 +0000 UTC m=+511.303209679" lastFinishedPulling="2026-01-31 06:51:27.445555344 +0000 UTC m=+513.722814919" observedRunningTime="2026-01-31 06:51:28.085904241 +0000 UTC m=+514.363163816" watchObservedRunningTime="2026-01-31 06:51:28.088999025 +0000 UTC m=+514.366258600" Jan 31 06:51:29 crc kubenswrapper[4687]: I0131 06:51:29.086906 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwgjc" event={"ID":"944e21b2-ebb1-48c3-aaa8-f0264981f380","Type":"ContainerStarted","Data":"7fe083f2bab96c8c89f507407567e6b8be45f5c5f14488c4e472b7f521ddffbd"} Jan 31 06:51:29 crc kubenswrapper[4687]: I0131 06:51:29.091580 4687 generic.go:334] "Generic (PLEG): container finished" podID="824621bb-1ee0-4034-9dfc-d8bc3440757c" containerID="40742de14011d80f7f7444af0f91857078f25008c7657b76fbec26f61dc13ef2" exitCode=0 Jan 31 06:51:29 crc kubenswrapper[4687]: I0131 06:51:29.091670 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zq5g" event={"ID":"824621bb-1ee0-4034-9dfc-d8bc3440757c","Type":"ContainerDied","Data":"40742de14011d80f7f7444af0f91857078f25008c7657b76fbec26f61dc13ef2"} Jan 31 06:51:30 crc kubenswrapper[4687]: I0131 
06:51:30.098257 4687 generic.go:334] "Generic (PLEG): container finished" podID="944e21b2-ebb1-48c3-aaa8-f0264981f380" containerID="7fe083f2bab96c8c89f507407567e6b8be45f5c5f14488c4e472b7f521ddffbd" exitCode=0 Jan 31 06:51:30 crc kubenswrapper[4687]: I0131 06:51:30.098299 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwgjc" event={"ID":"944e21b2-ebb1-48c3-aaa8-f0264981f380","Type":"ContainerDied","Data":"7fe083f2bab96c8c89f507407567e6b8be45f5c5f14488c4e472b7f521ddffbd"} Jan 31 06:51:30 crc kubenswrapper[4687]: I0131 06:51:30.102552 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zq5g" event={"ID":"824621bb-1ee0-4034-9dfc-d8bc3440757c","Type":"ContainerStarted","Data":"a2d89a5cce26b7fa0cc95f4ebb7444985ba6a4e81e892e9df6de38746047ce4a"} Jan 31 06:51:31 crc kubenswrapper[4687]: I0131 06:51:31.109627 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwgjc" event={"ID":"944e21b2-ebb1-48c3-aaa8-f0264981f380","Type":"ContainerStarted","Data":"4600017082a257bc3b679ff45b556ee04f4ad25b90824d45648ff95e64099bf8"} Jan 31 06:51:31 crc kubenswrapper[4687]: I0131 06:51:31.127803 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2zq5g" podStartSLOduration=3.693974302 podStartE2EDuration="6.127785888s" podCreationTimestamp="2026-01-31 06:51:25 +0000 UTC" firstStartedPulling="2026-01-31 06:51:27.056541507 +0000 UTC m=+513.333801082" lastFinishedPulling="2026-01-31 06:51:29.490353093 +0000 UTC m=+515.767612668" observedRunningTime="2026-01-31 06:51:30.146236202 +0000 UTC m=+516.423495777" watchObservedRunningTime="2026-01-31 06:51:31.127785888 +0000 UTC m=+517.405045463" Jan 31 06:51:31 crc kubenswrapper[4687]: I0131 06:51:31.128969 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nwgjc" 
podStartSLOduration=2.653604617 podStartE2EDuration="5.12896308s" podCreationTimestamp="2026-01-31 06:51:26 +0000 UTC" firstStartedPulling="2026-01-31 06:51:28.074881752 +0000 UTC m=+514.352141317" lastFinishedPulling="2026-01-31 06:51:30.550240205 +0000 UTC m=+516.827499780" observedRunningTime="2026-01-31 06:51:31.125620209 +0000 UTC m=+517.402879784" watchObservedRunningTime="2026-01-31 06:51:31.12896308 +0000 UTC m=+517.406222645" Jan 31 06:51:33 crc kubenswrapper[4687]: I0131 06:51:33.884607 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7cnml" Jan 31 06:51:33 crc kubenswrapper[4687]: I0131 06:51:33.885120 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7cnml" Jan 31 06:51:33 crc kubenswrapper[4687]: I0131 06:51:33.928445 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7cnml" Jan 31 06:51:34 crc kubenswrapper[4687]: I0131 06:51:34.110463 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5vl6d" Jan 31 06:51:34 crc kubenswrapper[4687]: I0131 06:51:34.110519 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5vl6d" Jan 31 06:51:34 crc kubenswrapper[4687]: I0131 06:51:34.162515 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7cnml" Jan 31 06:51:34 crc kubenswrapper[4687]: I0131 06:51:34.163041 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5vl6d" Jan 31 06:51:34 crc kubenswrapper[4687]: I0131 06:51:34.202081 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5vl6d" Jan 31 06:51:36 crc kubenswrapper[4687]: I0131 
06:51:36.281231 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2zq5g" Jan 31 06:51:36 crc kubenswrapper[4687]: I0131 06:51:36.281287 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2zq5g" Jan 31 06:51:36 crc kubenswrapper[4687]: I0131 06:51:36.344722 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2zq5g" Jan 31 06:51:36 crc kubenswrapper[4687]: I0131 06:51:36.486597 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nwgjc" Jan 31 06:51:36 crc kubenswrapper[4687]: I0131 06:51:36.486642 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nwgjc" Jan 31 06:51:36 crc kubenswrapper[4687]: I0131 06:51:36.549183 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nwgjc" Jan 31 06:51:37 crc kubenswrapper[4687]: I0131 06:51:37.178561 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2zq5g" Jan 31 06:51:37 crc kubenswrapper[4687]: I0131 06:51:37.179239 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nwgjc" Jan 31 06:51:55 crc kubenswrapper[4687]: I0131 06:51:55.804972 4687 scope.go:117] "RemoveContainer" containerID="ac3ae5422bf890f9d59028d983f7728ae5eadb459b5c6c4efa88116d4de8795b" Jan 31 06:51:58 crc kubenswrapper[4687]: I0131 06:51:58.684935 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 31 06:51:58 crc kubenswrapper[4687]: I0131 06:51:58.685255 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:52:28 crc kubenswrapper[4687]: I0131 06:52:28.684986 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:52:28 crc kubenswrapper[4687]: I0131 06:52:28.686208 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:52:58 crc kubenswrapper[4687]: I0131 06:52:58.684787 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:52:58 crc kubenswrapper[4687]: I0131 06:52:58.685271 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:52:58 crc kubenswrapper[4687]: I0131 06:52:58.685323 4687 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:52:58 crc kubenswrapper[4687]: I0131 06:52:58.686088 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5db07448c568d30be8d0035977d79c95df6569fda6354ccd5bf27d59ac84ac4"} pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 06:52:58 crc kubenswrapper[4687]: I0131 06:52:58.686162 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" containerID="cri-o://f5db07448c568d30be8d0035977d79c95df6569fda6354ccd5bf27d59ac84ac4" gracePeriod=600 Jan 31 06:52:59 crc kubenswrapper[4687]: I0131 06:52:59.560190 4687 generic.go:334] "Generic (PLEG): container finished" podID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerID="f5db07448c568d30be8d0035977d79c95df6569fda6354ccd5bf27d59ac84ac4" exitCode=0 Jan 31 06:52:59 crc kubenswrapper[4687]: I0131 06:52:59.560261 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerDied","Data":"f5db07448c568d30be8d0035977d79c95df6569fda6354ccd5bf27d59ac84ac4"} Jan 31 06:52:59 crc kubenswrapper[4687]: I0131 06:52:59.561146 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerStarted","Data":"bff6ab00d50f16002cddbf3c92b59320770b0e2b023430f0f4635f090395c87e"} Jan 31 06:52:59 crc kubenswrapper[4687]: I0131 06:52:59.561233 4687 scope.go:117] "RemoveContainer" 
containerID="fae6440a00bffd2c9912563b3a0133e343e7e89f2c4e7a9ccaeea3baa2211238" Jan 31 06:54:04 crc kubenswrapper[4687]: I0131 06:54:04.947583 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gjxcw"] Jan 31 06:54:04 crc kubenswrapper[4687]: I0131 06:54:04.949037 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:04 crc kubenswrapper[4687]: I0131 06:54:04.956664 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gjxcw"] Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.088481 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/22f08ff3-a798-474e-9c0d-32232a35274e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.088547 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/22f08ff3-a798-474e-9c0d-32232a35274e-registry-certificates\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.088587 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22f08ff3-a798-474e-9c0d-32232a35274e-trusted-ca\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 
06:54:05.088610 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22f08ff3-a798-474e-9c0d-32232a35274e-bound-sa-token\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.088671 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/22f08ff3-a798-474e-9c0d-32232a35274e-registry-tls\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.088783 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd68f\" (UniqueName: \"kubernetes.io/projected/22f08ff3-a798-474e-9c0d-32232a35274e-kube-api-access-xd68f\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.088809 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/22f08ff3-a798-474e-9c0d-32232a35274e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.088838 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.120784 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.190494 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd68f\" (UniqueName: \"kubernetes.io/projected/22f08ff3-a798-474e-9c0d-32232a35274e-kube-api-access-xd68f\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.190559 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/22f08ff3-a798-474e-9c0d-32232a35274e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.190618 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/22f08ff3-a798-474e-9c0d-32232a35274e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.190676 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/22f08ff3-a798-474e-9c0d-32232a35274e-registry-certificates\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.190741 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22f08ff3-a798-474e-9c0d-32232a35274e-trusted-ca\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.190770 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22f08ff3-a798-474e-9c0d-32232a35274e-bound-sa-token\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.190798 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/22f08ff3-a798-474e-9c0d-32232a35274e-registry-tls\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.191356 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/22f08ff3-a798-474e-9c0d-32232a35274e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.192447 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/22f08ff3-a798-474e-9c0d-32232a35274e-registry-certificates\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.192618 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22f08ff3-a798-474e-9c0d-32232a35274e-trusted-ca\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.197566 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/22f08ff3-a798-474e-9c0d-32232a35274e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.197600 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/22f08ff3-a798-474e-9c0d-32232a35274e-registry-tls\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.209881 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22f08ff3-a798-474e-9c0d-32232a35274e-bound-sa-token\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc 
kubenswrapper[4687]: I0131 06:54:05.210775 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd68f\" (UniqueName: \"kubernetes.io/projected/22f08ff3-a798-474e-9c0d-32232a35274e-kube-api-access-xd68f\") pod \"image-registry-66df7c8f76-gjxcw\" (UID: \"22f08ff3-a798-474e-9c0d-32232a35274e\") " pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.275649 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.458028 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gjxcw"] Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.915382 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" event={"ID":"22f08ff3-a798-474e-9c0d-32232a35274e","Type":"ContainerStarted","Data":"4a72c6d195b522b74c7b91a2762be6e2177f2b2dd31ea18df7cde7a2e6b32b7f"} Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.915676 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.915693 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" event={"ID":"22f08ff3-a798-474e-9c0d-32232a35274e","Type":"ContainerStarted","Data":"6e8dbf81f6e7d0d674689dbc3a304e660fee02242a3d151b78af3d846b0a960a"} Jan 31 06:54:05 crc kubenswrapper[4687]: I0131 06:54:05.936556 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" podStartSLOduration=1.936528663 podStartE2EDuration="1.936528663s" podCreationTimestamp="2026-01-31 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:54:05.935288369 +0000 UTC m=+672.212547944" watchObservedRunningTime="2026-01-31 06:54:05.936528663 +0000 UTC m=+672.213788268" Jan 31 06:54:25 crc kubenswrapper[4687]: I0131 06:54:25.282684 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gjxcw" Jan 31 06:54:25 crc kubenswrapper[4687]: I0131 06:54:25.375170 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zm4ws"] Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.411322 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" podUID="8e49c821-a661-46f0-bbce-7cc8366fee3f" containerName="registry" containerID="cri-o://e881947c46d59ea96bf3953c496423ab811050b77857fb188392e3b7255d906b" gracePeriod=30 Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.727602 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.925273 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e49c821-a661-46f0-bbce-7cc8366fee3f-trusted-ca\") pod \"8e49c821-a661-46f0-bbce-7cc8366fee3f\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.925367 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8e49c821-a661-46f0-bbce-7cc8366fee3f-installation-pull-secrets\") pod \"8e49c821-a661-46f0-bbce-7cc8366fee3f\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.925490 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8e49c821-a661-46f0-bbce-7cc8366fee3f-ca-trust-extracted\") pod \"8e49c821-a661-46f0-bbce-7cc8366fee3f\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.925596 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-registry-tls\") pod \"8e49c821-a661-46f0-bbce-7cc8366fee3f\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.925655 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9crws\" (UniqueName: \"kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-kube-api-access-9crws\") pod \"8e49c821-a661-46f0-bbce-7cc8366fee3f\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.925711 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8e49c821-a661-46f0-bbce-7cc8366fee3f-registry-certificates\") pod \"8e49c821-a661-46f0-bbce-7cc8366fee3f\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.925752 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-bound-sa-token\") pod \"8e49c821-a661-46f0-bbce-7cc8366fee3f\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.926004 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8e49c821-a661-46f0-bbce-7cc8366fee3f\" (UID: \"8e49c821-a661-46f0-bbce-7cc8366fee3f\") " Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.927150 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e49c821-a661-46f0-bbce-7cc8366fee3f-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8e49c821-a661-46f0-bbce-7cc8366fee3f" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.927489 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e49c821-a661-46f0-bbce-7cc8366fee3f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8e49c821-a661-46f0-bbce-7cc8366fee3f" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.933693 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-kube-api-access-9crws" (OuterVolumeSpecName: "kube-api-access-9crws") pod "8e49c821-a661-46f0-bbce-7cc8366fee3f" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f"). InnerVolumeSpecName "kube-api-access-9crws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.936817 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8e49c821-a661-46f0-bbce-7cc8366fee3f" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.936989 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8e49c821-a661-46f0-bbce-7cc8366fee3f" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.937168 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e49c821-a661-46f0-bbce-7cc8366fee3f-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8e49c821-a661-46f0-bbce-7cc8366fee3f" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.944098 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "8e49c821-a661-46f0-bbce-7cc8366fee3f" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 06:54:50 crc kubenswrapper[4687]: I0131 06:54:50.959816 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e49c821-a661-46f0-bbce-7cc8366fee3f-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8e49c821-a661-46f0-bbce-7cc8366fee3f" (UID: "8e49c821-a661-46f0-bbce-7cc8366fee3f"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.027296 4687 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.027350 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9crws\" (UniqueName: \"kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-kube-api-access-9crws\") on node \"crc\" DevicePath \"\"" Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.027369 4687 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8e49c821-a661-46f0-bbce-7cc8366fee3f-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.027387 4687 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8e49c821-a661-46f0-bbce-7cc8366fee3f-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.027402 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e49c821-a661-46f0-bbce-7cc8366fee3f-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.027442 4687 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8e49c821-a661-46f0-bbce-7cc8366fee3f-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.027457 4687 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8e49c821-a661-46f0-bbce-7cc8366fee3f-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.176820 4687 generic.go:334] "Generic (PLEG): container finished" podID="8e49c821-a661-46f0-bbce-7cc8366fee3f" containerID="e881947c46d59ea96bf3953c496423ab811050b77857fb188392e3b7255d906b" exitCode=0 Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.176868 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" event={"ID":"8e49c821-a661-46f0-bbce-7cc8366fee3f","Type":"ContainerDied","Data":"e881947c46d59ea96bf3953c496423ab811050b77857fb188392e3b7255d906b"} Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.176880 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.176899 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zm4ws" event={"ID":"8e49c821-a661-46f0-bbce-7cc8366fee3f","Type":"ContainerDied","Data":"63d9c4880212e25f8442ea5b30c1cbd7fc1f9f91d0d8ab48764a50ec5c48d018"} Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.176918 4687 scope.go:117] "RemoveContainer" containerID="e881947c46d59ea96bf3953c496423ab811050b77857fb188392e3b7255d906b" Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.198631 4687 scope.go:117] "RemoveContainer" containerID="e881947c46d59ea96bf3953c496423ab811050b77857fb188392e3b7255d906b" Jan 31 06:54:51 crc kubenswrapper[4687]: E0131 06:54:51.202072 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e881947c46d59ea96bf3953c496423ab811050b77857fb188392e3b7255d906b\": container with ID starting with e881947c46d59ea96bf3953c496423ab811050b77857fb188392e3b7255d906b not found: ID does not exist" containerID="e881947c46d59ea96bf3953c496423ab811050b77857fb188392e3b7255d906b" Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.202133 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e881947c46d59ea96bf3953c496423ab811050b77857fb188392e3b7255d906b"} err="failed to get container status \"e881947c46d59ea96bf3953c496423ab811050b77857fb188392e3b7255d906b\": rpc error: code = NotFound desc = could not find container \"e881947c46d59ea96bf3953c496423ab811050b77857fb188392e3b7255d906b\": container with ID starting with e881947c46d59ea96bf3953c496423ab811050b77857fb188392e3b7255d906b not found: ID does not exist" Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.205423 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-zm4ws"] Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.211979 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zm4ws"] Jan 31 06:54:51 crc kubenswrapper[4687]: I0131 06:54:51.616881 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e49c821-a661-46f0-bbce-7cc8366fee3f" path="/var/lib/kubelet/pods/8e49c821-a661-46f0-bbce-7cc8366fee3f/volumes" Jan 31 06:54:58 crc kubenswrapper[4687]: I0131 06:54:58.684395 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:54:58 crc kubenswrapper[4687]: I0131 06:54:58.685186 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:55:28 crc kubenswrapper[4687]: I0131 06:55:28.684267 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:55:28 crc kubenswrapper[4687]: I0131 06:55:28.685683 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 
06:55:44 crc kubenswrapper[4687]: I0131 06:55:44.005186 4687 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 06:55:58 crc kubenswrapper[4687]: I0131 06:55:58.684000 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:55:58 crc kubenswrapper[4687]: I0131 06:55:58.684615 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:55:58 crc kubenswrapper[4687]: I0131 06:55:58.684672 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:55:58 crc kubenswrapper[4687]: I0131 06:55:58.685240 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bff6ab00d50f16002cddbf3c92b59320770b0e2b023430f0f4635f090395c87e"} pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 06:55:58 crc kubenswrapper[4687]: I0131 06:55:58.685326 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" containerID="cri-o://bff6ab00d50f16002cddbf3c92b59320770b0e2b023430f0f4635f090395c87e" gracePeriod=600 Jan 31 06:55:59 crc 
kubenswrapper[4687]: I0131 06:55:59.566155 4687 generic.go:334] "Generic (PLEG): container finished" podID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerID="bff6ab00d50f16002cddbf3c92b59320770b0e2b023430f0f4635f090395c87e" exitCode=0 Jan 31 06:55:59 crc kubenswrapper[4687]: I0131 06:55:59.566245 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerDied","Data":"bff6ab00d50f16002cddbf3c92b59320770b0e2b023430f0f4635f090395c87e"} Jan 31 06:55:59 crc kubenswrapper[4687]: I0131 06:55:59.566543 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerStarted","Data":"0cd4248235582b525083ab077dd16b2a2243217ecf8d962c50ecbf6042075994"} Jan 31 06:55:59 crc kubenswrapper[4687]: I0131 06:55:59.566566 4687 scope.go:117] "RemoveContainer" containerID="f5db07448c568d30be8d0035977d79c95df6569fda6354ccd5bf27d59ac84ac4" Jan 31 06:58:28 crc kubenswrapper[4687]: I0131 06:58:28.684305 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:58:28 crc kubenswrapper[4687]: I0131 06:58:28.684883 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:58:58 crc kubenswrapper[4687]: I0131 06:58:58.683959 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:58:58 crc kubenswrapper[4687]: I0131 06:58:58.684556 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:59:28 crc kubenswrapper[4687]: I0131 06:59:28.684976 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:59:28 crc kubenswrapper[4687]: I0131 06:59:28.685589 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:59:28 crc kubenswrapper[4687]: I0131 06:59:28.685638 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 06:59:28 crc kubenswrapper[4687]: I0131 06:59:28.686209 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0cd4248235582b525083ab077dd16b2a2243217ecf8d962c50ecbf6042075994"} pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 06:59:28 crc 
kubenswrapper[4687]: I0131 06:59:28.686265 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" containerID="cri-o://0cd4248235582b525083ab077dd16b2a2243217ecf8d962c50ecbf6042075994" gracePeriod=600 Jan 31 06:59:29 crc kubenswrapper[4687]: I0131 06:59:29.708584 4687 generic.go:334] "Generic (PLEG): container finished" podID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerID="0cd4248235582b525083ab077dd16b2a2243217ecf8d962c50ecbf6042075994" exitCode=0 Jan 31 06:59:29 crc kubenswrapper[4687]: I0131 06:59:29.708643 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerDied","Data":"0cd4248235582b525083ab077dd16b2a2243217ecf8d962c50ecbf6042075994"} Jan 31 06:59:29 crc kubenswrapper[4687]: I0131 06:59:29.709456 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerStarted","Data":"2870678d8ef3b4ce66abc3a889acd9cf6e04c0f95a1291bebaab2b0448491609"} Jan 31 06:59:29 crc kubenswrapper[4687]: I0131 06:59:29.709505 4687 scope.go:117] "RemoveContainer" containerID="bff6ab00d50f16002cddbf3c92b59320770b0e2b023430f0f4635f090395c87e" Jan 31 06:59:57 crc kubenswrapper[4687]: I0131 06:59:57.549167 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rtmn8"] Jan 31 06:59:57 crc kubenswrapper[4687]: E0131 06:59:57.550080 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e49c821-a661-46f0-bbce-7cc8366fee3f" containerName="registry" Jan 31 06:59:57 crc kubenswrapper[4687]: I0131 06:59:57.550101 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8e49c821-a661-46f0-bbce-7cc8366fee3f" containerName="registry" Jan 31 06:59:57 crc kubenswrapper[4687]: I0131 06:59:57.550372 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e49c821-a661-46f0-bbce-7cc8366fee3f" containerName="registry" Jan 31 06:59:57 crc kubenswrapper[4687]: I0131 06:59:57.551472 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 06:59:57 crc kubenswrapper[4687]: I0131 06:59:57.559609 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtmn8"] Jan 31 06:59:57 crc kubenswrapper[4687]: I0131 06:59:57.675795 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b257b22f-9c2f-4138-ba10-5b93fa36baf8-utilities\") pod \"redhat-marketplace-rtmn8\" (UID: \"b257b22f-9c2f-4138-ba10-5b93fa36baf8\") " pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 06:59:57 crc kubenswrapper[4687]: I0131 06:59:57.675886 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frxpl\" (UniqueName: \"kubernetes.io/projected/b257b22f-9c2f-4138-ba10-5b93fa36baf8-kube-api-access-frxpl\") pod \"redhat-marketplace-rtmn8\" (UID: \"b257b22f-9c2f-4138-ba10-5b93fa36baf8\") " pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 06:59:57 crc kubenswrapper[4687]: I0131 06:59:57.675945 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b257b22f-9c2f-4138-ba10-5b93fa36baf8-catalog-content\") pod \"redhat-marketplace-rtmn8\" (UID: \"b257b22f-9c2f-4138-ba10-5b93fa36baf8\") " pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 06:59:57 crc kubenswrapper[4687]: I0131 06:59:57.776696 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-frxpl\" (UniqueName: \"kubernetes.io/projected/b257b22f-9c2f-4138-ba10-5b93fa36baf8-kube-api-access-frxpl\") pod \"redhat-marketplace-rtmn8\" (UID: \"b257b22f-9c2f-4138-ba10-5b93fa36baf8\") " pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 06:59:57 crc kubenswrapper[4687]: I0131 06:59:57.776875 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b257b22f-9c2f-4138-ba10-5b93fa36baf8-catalog-content\") pod \"redhat-marketplace-rtmn8\" (UID: \"b257b22f-9c2f-4138-ba10-5b93fa36baf8\") " pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 06:59:57 crc kubenswrapper[4687]: I0131 06:59:57.776968 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b257b22f-9c2f-4138-ba10-5b93fa36baf8-utilities\") pod \"redhat-marketplace-rtmn8\" (UID: \"b257b22f-9c2f-4138-ba10-5b93fa36baf8\") " pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 06:59:57 crc kubenswrapper[4687]: I0131 06:59:57.777554 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b257b22f-9c2f-4138-ba10-5b93fa36baf8-catalog-content\") pod \"redhat-marketplace-rtmn8\" (UID: \"b257b22f-9c2f-4138-ba10-5b93fa36baf8\") " pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 06:59:57 crc kubenswrapper[4687]: I0131 06:59:57.777810 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b257b22f-9c2f-4138-ba10-5b93fa36baf8-utilities\") pod \"redhat-marketplace-rtmn8\" (UID: \"b257b22f-9c2f-4138-ba10-5b93fa36baf8\") " pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 06:59:57 crc kubenswrapper[4687]: I0131 06:59:57.799140 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frxpl\" (UniqueName: 
\"kubernetes.io/projected/b257b22f-9c2f-4138-ba10-5b93fa36baf8-kube-api-access-frxpl\") pod \"redhat-marketplace-rtmn8\" (UID: \"b257b22f-9c2f-4138-ba10-5b93fa36baf8\") " pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 06:59:57 crc kubenswrapper[4687]: I0131 06:59:57.874085 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 06:59:58 crc kubenswrapper[4687]: I0131 06:59:58.060259 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtmn8"] Jan 31 06:59:58 crc kubenswrapper[4687]: I0131 06:59:58.852884 4687 generic.go:334] "Generic (PLEG): container finished" podID="b257b22f-9c2f-4138-ba10-5b93fa36baf8" containerID="b407e989eb276fbf8fae861bc16d4e38db39a6e3a410ea78c829aa7f16c2245d" exitCode=0 Jan 31 06:59:58 crc kubenswrapper[4687]: I0131 06:59:58.853176 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtmn8" event={"ID":"b257b22f-9c2f-4138-ba10-5b93fa36baf8","Type":"ContainerDied","Data":"b407e989eb276fbf8fae861bc16d4e38db39a6e3a410ea78c829aa7f16c2245d"} Jan 31 06:59:58 crc kubenswrapper[4687]: I0131 06:59:58.853212 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtmn8" event={"ID":"b257b22f-9c2f-4138-ba10-5b93fa36baf8","Type":"ContainerStarted","Data":"2838ea5ff200b7995042a1dbb78e31b2a4ec4dd3c55ff3c27d6e1855956d560f"} Jan 31 06:59:58 crc kubenswrapper[4687]: I0131 06:59:58.854726 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 07:00:00 crc kubenswrapper[4687]: I0131 07:00:00.155394 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7"] Jan 31 07:00:00 crc kubenswrapper[4687]: I0131 07:00:00.156120 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7" Jan 31 07:00:00 crc kubenswrapper[4687]: I0131 07:00:00.158627 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 07:00:00 crc kubenswrapper[4687]: I0131 07:00:00.161815 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 07:00:00 crc kubenswrapper[4687]: I0131 07:00:00.167807 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7"] Jan 31 07:00:00 crc kubenswrapper[4687]: I0131 07:00:00.308130 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnszb\" (UniqueName: \"kubernetes.io/projected/c922043d-b4f6-4291-9a54-c8eb22b00f8d-kube-api-access-cnszb\") pod \"collect-profiles-29497380-cwnm7\" (UID: \"c922043d-b4f6-4291-9a54-c8eb22b00f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7" Jan 31 07:00:00 crc kubenswrapper[4687]: I0131 07:00:00.308188 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c922043d-b4f6-4291-9a54-c8eb22b00f8d-secret-volume\") pod \"collect-profiles-29497380-cwnm7\" (UID: \"c922043d-b4f6-4291-9a54-c8eb22b00f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7" Jan 31 07:00:00 crc kubenswrapper[4687]: I0131 07:00:00.308278 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c922043d-b4f6-4291-9a54-c8eb22b00f8d-config-volume\") pod \"collect-profiles-29497380-cwnm7\" (UID: \"c922043d-b4f6-4291-9a54-c8eb22b00f8d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7" Jan 31 07:00:00 crc kubenswrapper[4687]: I0131 07:00:00.409818 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c922043d-b4f6-4291-9a54-c8eb22b00f8d-config-volume\") pod \"collect-profiles-29497380-cwnm7\" (UID: \"c922043d-b4f6-4291-9a54-c8eb22b00f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7" Jan 31 07:00:00 crc kubenswrapper[4687]: I0131 07:00:00.410149 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnszb\" (UniqueName: \"kubernetes.io/projected/c922043d-b4f6-4291-9a54-c8eb22b00f8d-kube-api-access-cnszb\") pod \"collect-profiles-29497380-cwnm7\" (UID: \"c922043d-b4f6-4291-9a54-c8eb22b00f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7" Jan 31 07:00:00 crc kubenswrapper[4687]: I0131 07:00:00.410179 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c922043d-b4f6-4291-9a54-c8eb22b00f8d-secret-volume\") pod \"collect-profiles-29497380-cwnm7\" (UID: \"c922043d-b4f6-4291-9a54-c8eb22b00f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7" Jan 31 07:00:00 crc kubenswrapper[4687]: I0131 07:00:00.410718 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c922043d-b4f6-4291-9a54-c8eb22b00f8d-config-volume\") pod \"collect-profiles-29497380-cwnm7\" (UID: \"c922043d-b4f6-4291-9a54-c8eb22b00f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7" Jan 31 07:00:00 crc kubenswrapper[4687]: I0131 07:00:00.417612 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c922043d-b4f6-4291-9a54-c8eb22b00f8d-secret-volume\") pod \"collect-profiles-29497380-cwnm7\" (UID: \"c922043d-b4f6-4291-9a54-c8eb22b00f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7" Jan 31 07:00:00 crc kubenswrapper[4687]: I0131 07:00:00.428515 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnszb\" (UniqueName: \"kubernetes.io/projected/c922043d-b4f6-4291-9a54-c8eb22b00f8d-kube-api-access-cnszb\") pod \"collect-profiles-29497380-cwnm7\" (UID: \"c922043d-b4f6-4291-9a54-c8eb22b00f8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7" Jan 31 07:00:00 crc kubenswrapper[4687]: I0131 07:00:00.472386 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7" Jan 31 07:00:00 crc kubenswrapper[4687]: I0131 07:00:00.860679 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7"] Jan 31 07:00:01 crc kubenswrapper[4687]: I0131 07:00:01.875957 4687 generic.go:334] "Generic (PLEG): container finished" podID="c922043d-b4f6-4291-9a54-c8eb22b00f8d" containerID="5713f52b1736bbb02eb321d5fdf4281c12c47cccd85504ee1ba1644fbb059675" exitCode=0 Jan 31 07:00:01 crc kubenswrapper[4687]: I0131 07:00:01.876026 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7" event={"ID":"c922043d-b4f6-4291-9a54-c8eb22b00f8d","Type":"ContainerDied","Data":"5713f52b1736bbb02eb321d5fdf4281c12c47cccd85504ee1ba1644fbb059675"} Jan 31 07:00:01 crc kubenswrapper[4687]: I0131 07:00:01.876218 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7" 
event={"ID":"c922043d-b4f6-4291-9a54-c8eb22b00f8d","Type":"ContainerStarted","Data":"65f118294efc7884c0e8068e2109a5f0577393a48f448d9de748953a70e50cfb"} Jan 31 07:00:01 crc kubenswrapper[4687]: I0131 07:00:01.878918 4687 generic.go:334] "Generic (PLEG): container finished" podID="b257b22f-9c2f-4138-ba10-5b93fa36baf8" containerID="831449559cc17040d18bc47380a8cb26c7ef97a75eb1773fc2a63a66125acaf7" exitCode=0 Jan 31 07:00:01 crc kubenswrapper[4687]: I0131 07:00:01.878976 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtmn8" event={"ID":"b257b22f-9c2f-4138-ba10-5b93fa36baf8","Type":"ContainerDied","Data":"831449559cc17040d18bc47380a8cb26c7ef97a75eb1773fc2a63a66125acaf7"} Jan 31 07:00:03 crc kubenswrapper[4687]: I0131 07:00:03.127162 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7" Jan 31 07:00:03 crc kubenswrapper[4687]: I0131 07:00:03.249359 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnszb\" (UniqueName: \"kubernetes.io/projected/c922043d-b4f6-4291-9a54-c8eb22b00f8d-kube-api-access-cnszb\") pod \"c922043d-b4f6-4291-9a54-c8eb22b00f8d\" (UID: \"c922043d-b4f6-4291-9a54-c8eb22b00f8d\") " Jan 31 07:00:03 crc kubenswrapper[4687]: I0131 07:00:03.249432 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c922043d-b4f6-4291-9a54-c8eb22b00f8d-config-volume\") pod \"c922043d-b4f6-4291-9a54-c8eb22b00f8d\" (UID: \"c922043d-b4f6-4291-9a54-c8eb22b00f8d\") " Jan 31 07:00:03 crc kubenswrapper[4687]: I0131 07:00:03.249571 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c922043d-b4f6-4291-9a54-c8eb22b00f8d-secret-volume\") pod \"c922043d-b4f6-4291-9a54-c8eb22b00f8d\" (UID: 
\"c922043d-b4f6-4291-9a54-c8eb22b00f8d\") " Jan 31 07:00:03 crc kubenswrapper[4687]: I0131 07:00:03.250063 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c922043d-b4f6-4291-9a54-c8eb22b00f8d-config-volume" (OuterVolumeSpecName: "config-volume") pod "c922043d-b4f6-4291-9a54-c8eb22b00f8d" (UID: "c922043d-b4f6-4291-9a54-c8eb22b00f8d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:00:03 crc kubenswrapper[4687]: I0131 07:00:03.254957 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c922043d-b4f6-4291-9a54-c8eb22b00f8d-kube-api-access-cnszb" (OuterVolumeSpecName: "kube-api-access-cnszb") pod "c922043d-b4f6-4291-9a54-c8eb22b00f8d" (UID: "c922043d-b4f6-4291-9a54-c8eb22b00f8d"). InnerVolumeSpecName "kube-api-access-cnszb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:00:03 crc kubenswrapper[4687]: I0131 07:00:03.255332 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c922043d-b4f6-4291-9a54-c8eb22b00f8d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c922043d-b4f6-4291-9a54-c8eb22b00f8d" (UID: "c922043d-b4f6-4291-9a54-c8eb22b00f8d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:00:03 crc kubenswrapper[4687]: I0131 07:00:03.351687 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnszb\" (UniqueName: \"kubernetes.io/projected/c922043d-b4f6-4291-9a54-c8eb22b00f8d-kube-api-access-cnszb\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:03 crc kubenswrapper[4687]: I0131 07:00:03.351725 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c922043d-b4f6-4291-9a54-c8eb22b00f8d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:03 crc kubenswrapper[4687]: I0131 07:00:03.351739 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c922043d-b4f6-4291-9a54-c8eb22b00f8d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:03 crc kubenswrapper[4687]: I0131 07:00:03.891689 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtmn8" event={"ID":"b257b22f-9c2f-4138-ba10-5b93fa36baf8","Type":"ContainerStarted","Data":"ed7197029267da74212df66269585e3e693cff3034185bc992f02b144c82c8a8"} Jan 31 07:00:03 crc kubenswrapper[4687]: I0131 07:00:03.893063 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7" event={"ID":"c922043d-b4f6-4291-9a54-c8eb22b00f8d","Type":"ContainerDied","Data":"65f118294efc7884c0e8068e2109a5f0577393a48f448d9de748953a70e50cfb"} Jan 31 07:00:03 crc kubenswrapper[4687]: I0131 07:00:03.893093 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65f118294efc7884c0e8068e2109a5f0577393a48f448d9de748953a70e50cfb" Jan 31 07:00:03 crc kubenswrapper[4687]: I0131 07:00:03.893094 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497380-cwnm7" Jan 31 07:00:03 crc kubenswrapper[4687]: I0131 07:00:03.910356 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rtmn8" podStartSLOduration=2.546074505 podStartE2EDuration="6.910340855s" podCreationTimestamp="2026-01-31 06:59:57 +0000 UTC" firstStartedPulling="2026-01-31 06:59:58.854541262 +0000 UTC m=+1025.131800837" lastFinishedPulling="2026-01-31 07:00:03.218807612 +0000 UTC m=+1029.496067187" observedRunningTime="2026-01-31 07:00:03.908083333 +0000 UTC m=+1030.185342908" watchObservedRunningTime="2026-01-31 07:00:03.910340855 +0000 UTC m=+1030.187600430" Jan 31 07:00:05 crc kubenswrapper[4687]: I0131 07:00:05.617767 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tnn4l"] Jan 31 07:00:05 crc kubenswrapper[4687]: E0131 07:00:05.617981 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c922043d-b4f6-4291-9a54-c8eb22b00f8d" containerName="collect-profiles" Jan 31 07:00:05 crc kubenswrapper[4687]: I0131 07:00:05.617993 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c922043d-b4f6-4291-9a54-c8eb22b00f8d" containerName="collect-profiles" Jan 31 07:00:05 crc kubenswrapper[4687]: I0131 07:00:05.618098 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c922043d-b4f6-4291-9a54-c8eb22b00f8d" containerName="collect-profiles" Jan 31 07:00:05 crc kubenswrapper[4687]: I0131 07:00:05.621494 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:05 crc kubenswrapper[4687]: I0131 07:00:05.645160 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tnn4l"] Jan 31 07:00:05 crc kubenswrapper[4687]: I0131 07:00:05.783756 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6rzv\" (UniqueName: \"kubernetes.io/projected/e02a597f-422d-4b98-b829-77b6e4d72318-kube-api-access-h6rzv\") pod \"community-operators-tnn4l\" (UID: \"e02a597f-422d-4b98-b829-77b6e4d72318\") " pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:05 crc kubenswrapper[4687]: I0131 07:00:05.783821 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02a597f-422d-4b98-b829-77b6e4d72318-utilities\") pod \"community-operators-tnn4l\" (UID: \"e02a597f-422d-4b98-b829-77b6e4d72318\") " pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:05 crc kubenswrapper[4687]: I0131 07:00:05.783856 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02a597f-422d-4b98-b829-77b6e4d72318-catalog-content\") pod \"community-operators-tnn4l\" (UID: \"e02a597f-422d-4b98-b829-77b6e4d72318\") " pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:05 crc kubenswrapper[4687]: I0131 07:00:05.885175 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02a597f-422d-4b98-b829-77b6e4d72318-utilities\") pod \"community-operators-tnn4l\" (UID: \"e02a597f-422d-4b98-b829-77b6e4d72318\") " pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:05 crc kubenswrapper[4687]: I0131 07:00:05.885224 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02a597f-422d-4b98-b829-77b6e4d72318-catalog-content\") pod \"community-operators-tnn4l\" (UID: \"e02a597f-422d-4b98-b829-77b6e4d72318\") " pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:05 crc kubenswrapper[4687]: I0131 07:00:05.885284 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6rzv\" (UniqueName: \"kubernetes.io/projected/e02a597f-422d-4b98-b829-77b6e4d72318-kube-api-access-h6rzv\") pod \"community-operators-tnn4l\" (UID: \"e02a597f-422d-4b98-b829-77b6e4d72318\") " pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:05 crc kubenswrapper[4687]: I0131 07:00:05.885962 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02a597f-422d-4b98-b829-77b6e4d72318-utilities\") pod \"community-operators-tnn4l\" (UID: \"e02a597f-422d-4b98-b829-77b6e4d72318\") " pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:05 crc kubenswrapper[4687]: I0131 07:00:05.886202 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02a597f-422d-4b98-b829-77b6e4d72318-catalog-content\") pod \"community-operators-tnn4l\" (UID: \"e02a597f-422d-4b98-b829-77b6e4d72318\") " pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:05 crc kubenswrapper[4687]: I0131 07:00:05.913237 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6rzv\" (UniqueName: \"kubernetes.io/projected/e02a597f-422d-4b98-b829-77b6e4d72318-kube-api-access-h6rzv\") pod \"community-operators-tnn4l\" (UID: \"e02a597f-422d-4b98-b829-77b6e4d72318\") " pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:05 crc kubenswrapper[4687]: I0131 07:00:05.941840 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:06 crc kubenswrapper[4687]: I0131 07:00:06.456979 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tnn4l"] Jan 31 07:00:06 crc kubenswrapper[4687]: W0131 07:00:06.463594 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode02a597f_422d_4b98_b829_77b6e4d72318.slice/crio-1cb133bc630ac9446ae41fa5d4655eb8d5d186e03c8e900fc942042cbfad5493 WatchSource:0}: Error finding container 1cb133bc630ac9446ae41fa5d4655eb8d5d186e03c8e900fc942042cbfad5493: Status 404 returned error can't find the container with id 1cb133bc630ac9446ae41fa5d4655eb8d5d186e03c8e900fc942042cbfad5493 Jan 31 07:00:06 crc kubenswrapper[4687]: I0131 07:00:06.908395 4687 generic.go:334] "Generic (PLEG): container finished" podID="e02a597f-422d-4b98-b829-77b6e4d72318" containerID="885e3add15a0fddb4a5c301bc7cbf12aff88c6376be0d2d975f10f7874b61e0b" exitCode=0 Jan 31 07:00:06 crc kubenswrapper[4687]: I0131 07:00:06.908467 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnn4l" event={"ID":"e02a597f-422d-4b98-b829-77b6e4d72318","Type":"ContainerDied","Data":"885e3add15a0fddb4a5c301bc7cbf12aff88c6376be0d2d975f10f7874b61e0b"} Jan 31 07:00:06 crc kubenswrapper[4687]: I0131 07:00:06.908503 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnn4l" event={"ID":"e02a597f-422d-4b98-b829-77b6e4d72318","Type":"ContainerStarted","Data":"1cb133bc630ac9446ae41fa5d4655eb8d5d186e03c8e900fc942042cbfad5493"} Jan 31 07:00:07 crc kubenswrapper[4687]: I0131 07:00:07.874951 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 07:00:07 crc kubenswrapper[4687]: I0131 07:00:07.875210 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 07:00:07 crc kubenswrapper[4687]: I0131 07:00:07.916047 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 07:00:09 crc kubenswrapper[4687]: I0131 07:00:09.924284 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnn4l" event={"ID":"e02a597f-422d-4b98-b829-77b6e4d72318","Type":"ContainerStarted","Data":"bd5d3f96442b8c94e03d59731332fd074156c313734841824448469cf9d3c201"} Jan 31 07:00:10 crc kubenswrapper[4687]: I0131 07:00:10.933742 4687 generic.go:334] "Generic (PLEG): container finished" podID="e02a597f-422d-4b98-b829-77b6e4d72318" containerID="bd5d3f96442b8c94e03d59731332fd074156c313734841824448469cf9d3c201" exitCode=0 Jan 31 07:00:10 crc kubenswrapper[4687]: I0131 07:00:10.933799 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnn4l" event={"ID":"e02a597f-422d-4b98-b829-77b6e4d72318","Type":"ContainerDied","Data":"bd5d3f96442b8c94e03d59731332fd074156c313734841824448469cf9d3c201"} Jan 31 07:00:13 crc kubenswrapper[4687]: I0131 07:00:13.952837 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnn4l" event={"ID":"e02a597f-422d-4b98-b829-77b6e4d72318","Type":"ContainerStarted","Data":"6c0d9997b0409174d4e6753df59ec0fd001107b485f7f8c417b43cfabf5c535a"} Jan 31 07:00:13 crc kubenswrapper[4687]: I0131 07:00:13.973715 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tnn4l" podStartSLOduration=2.865631497 podStartE2EDuration="8.973687471s" podCreationTimestamp="2026-01-31 07:00:05 +0000 UTC" firstStartedPulling="2026-01-31 07:00:06.909721266 +0000 UTC m=+1033.186980841" lastFinishedPulling="2026-01-31 07:00:13.01777724 +0000 UTC m=+1039.295036815" observedRunningTime="2026-01-31 
07:00:13.971783338 +0000 UTC m=+1040.249042933" watchObservedRunningTime="2026-01-31 07:00:13.973687471 +0000 UTC m=+1040.250947046" Jan 31 07:00:15 crc kubenswrapper[4687]: I0131 07:00:15.942055 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:15 crc kubenswrapper[4687]: I0131 07:00:15.942462 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:15 crc kubenswrapper[4687]: I0131 07:00:15.986673 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:17 crc kubenswrapper[4687]: I0131 07:00:17.875959 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zvpgn"] Jan 31 07:00:17 crc kubenswrapper[4687]: I0131 07:00:17.877164 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="nbdb" containerID="cri-o://9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2" gracePeriod=30 Jan 31 07:00:17 crc kubenswrapper[4687]: I0131 07:00:17.877169 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1" gracePeriod=30 Jan 31 07:00:17 crc kubenswrapper[4687]: I0131 07:00:17.877186 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="northd" containerID="cri-o://d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1" gracePeriod=30 Jan 31 07:00:17 crc 
kubenswrapper[4687]: I0131 07:00:17.877354 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="sbdb" containerID="cri-o://fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d" gracePeriod=30 Jan 31 07:00:17 crc kubenswrapper[4687]: I0131 07:00:17.877479 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="kube-rbac-proxy-node" containerID="cri-o://5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758" gracePeriod=30 Jan 31 07:00:17 crc kubenswrapper[4687]: I0131 07:00:17.877498 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovn-acl-logging" containerID="cri-o://cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad" gracePeriod=30 Jan 31 07:00:17 crc kubenswrapper[4687]: I0131 07:00:17.877124 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovn-controller" containerID="cri-o://a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443" gracePeriod=30 Jan 31 07:00:17 crc kubenswrapper[4687]: I0131 07:00:17.925628 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovnkube-controller" containerID="cri-o://f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5" gracePeriod=30 Jan 31 07:00:17 crc kubenswrapper[4687]: I0131 07:00:17.934997 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.000246 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77mzd_96c21054-65ed-4db4-969f-bbb10f612772/kube-multus/2.log" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.000672 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77mzd_96c21054-65ed-4db4-969f-bbb10f612772/kube-multus/1.log" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.000703 4687 generic.go:334] "Generic (PLEG): container finished" podID="96c21054-65ed-4db4-969f-bbb10f612772" containerID="e31d388087fd196fdceaf3057d03a85e5ee6d2d5b7b4e69fde93604b3a82d632" exitCode=2 Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.000732 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-77mzd" event={"ID":"96c21054-65ed-4db4-969f-bbb10f612772","Type":"ContainerDied","Data":"e31d388087fd196fdceaf3057d03a85e5ee6d2d5b7b4e69fde93604b3a82d632"} Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.000780 4687 scope.go:117] "RemoveContainer" containerID="f976ffe4cdfe5f8bfeef0529c144f8511fd1e7562a2202b11fd602c815560562" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.001319 4687 scope.go:117] "RemoveContainer" containerID="e31d388087fd196fdceaf3057d03a85e5ee6d2d5b7b4e69fde93604b3a82d632" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.028616 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtmn8"] Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.028828 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rtmn8" podUID="b257b22f-9c2f-4138-ba10-5b93fa36baf8" containerName="registry-server" containerID="cri-o://ed7197029267da74212df66269585e3e693cff3034185bc992f02b144c82c8a8" gracePeriod=2 Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 
07:00:18.241385 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovnkube-controller/3.log" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.247997 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovn-acl-logging/0.log" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.248506 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.248972 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovn-controller/0.log" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.249383 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.317976 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m6dfr"] Jan 31 07:00:18 crc kubenswrapper[4687]: E0131 07:00:18.318167 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="kube-rbac-proxy-node" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318178 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="kube-rbac-proxy-node" Jan 31 07:00:18 crc kubenswrapper[4687]: E0131 07:00:18.318201 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b257b22f-9c2f-4138-ba10-5b93fa36baf8" containerName="registry-server" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318207 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b257b22f-9c2f-4138-ba10-5b93fa36baf8" 
containerName="registry-server" Jan 31 07:00:18 crc kubenswrapper[4687]: E0131 07:00:18.318220 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b257b22f-9c2f-4138-ba10-5b93fa36baf8" containerName="extract-utilities" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318226 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b257b22f-9c2f-4138-ba10-5b93fa36baf8" containerName="extract-utilities" Jan 31 07:00:18 crc kubenswrapper[4687]: E0131 07:00:18.318234 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318239 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 07:00:18 crc kubenswrapper[4687]: E0131 07:00:18.318246 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovnkube-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318252 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovnkube-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: E0131 07:00:18.318258 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b257b22f-9c2f-4138-ba10-5b93fa36baf8" containerName="extract-content" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318270 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b257b22f-9c2f-4138-ba10-5b93fa36baf8" containerName="extract-content" Jan 31 07:00:18 crc kubenswrapper[4687]: E0131 07:00:18.318278 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovnkube-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318284 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovnkube-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: E0131 07:00:18.318292 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovnkube-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318298 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovnkube-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: E0131 07:00:18.318308 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovnkube-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318313 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovnkube-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: E0131 07:00:18.318319 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovn-acl-logging" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318325 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovn-acl-logging" Jan 31 07:00:18 crc kubenswrapper[4687]: E0131 07:00:18.318334 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="sbdb" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318340 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="sbdb" Jan 31 07:00:18 crc kubenswrapper[4687]: E0131 07:00:18.318349 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="nbdb" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318355 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="nbdb" Jan 31 07:00:18 crc kubenswrapper[4687]: E0131 07:00:18.318362 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="kubecfg-setup" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318368 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="kubecfg-setup" Jan 31 07:00:18 crc kubenswrapper[4687]: E0131 07:00:18.318377 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="northd" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318383 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="northd" Jan 31 07:00:18 crc kubenswrapper[4687]: E0131 07:00:18.318390 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovn-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318395 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovn-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318516 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="northd" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318529 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovn-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318536 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318556 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovnkube-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318562 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovn-acl-logging" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318570 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovnkube-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318576 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="nbdb" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318584 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovnkube-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318590 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovnkube-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318601 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="kube-rbac-proxy-node" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318613 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="sbdb" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318623 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b257b22f-9c2f-4138-ba10-5b93fa36baf8" containerName="registry-server" Jan 31 07:00:18 crc kubenswrapper[4687]: E0131 07:00:18.318744 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovnkube-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318753 4687 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovnkube-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.318846 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerName="ovnkube-controller" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.324578 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331609 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9ts2\" (UniqueName: \"kubernetes.io/projected/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-kube-api-access-w9ts2\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331648 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-cni-netd\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331672 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-systemd-units\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331687 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-ovn\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331714 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-run-ovn-kubernetes\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331730 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frxpl\" (UniqueName: \"kubernetes.io/projected/b257b22f-9c2f-4138-ba10-5b93fa36baf8-kube-api-access-frxpl\") pod \"b257b22f-9c2f-4138-ba10-5b93fa36baf8\" (UID: \"b257b22f-9c2f-4138-ba10-5b93fa36baf8\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331752 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b257b22f-9c2f-4138-ba10-5b93fa36baf8-catalog-content\") pod \"b257b22f-9c2f-4138-ba10-5b93fa36baf8\" (UID: \"b257b22f-9c2f-4138-ba10-5b93fa36baf8\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331784 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovnkube-config\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331789 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331802 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-cni-bin\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331837 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331864 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331873 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b257b22f-9c2f-4138-ba10-5b93fa36baf8-utilities\") pod \"b257b22f-9c2f-4138-ba10-5b93fa36baf8\" (UID: \"b257b22f-9c2f-4138-ba10-5b93fa36baf8\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331884 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). 
InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331892 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-kubelet\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331908 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-slash\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331938 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovn-node-metrics-cert\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.331970 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-etc-openvswitch\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.332000 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-node-log\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.332016 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovnkube-script-lib\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.332038 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-var-lib-cni-networks-ovn-kubernetes\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.332058 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-log-socket\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.332091 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-systemd\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.332105 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-run-netns\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.332119 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-env-overrides\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: 
\"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.332133 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-var-lib-openvswitch\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.332145 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-openvswitch\") pod \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\" (UID: \"55484aa7-5d82-4f2e-ab22-2ceae9c90c96\") " Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.332480 4687 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.332507 4687 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.332516 4687 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.332525 4687 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.332570 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.333011 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-node-log" (OuterVolumeSpecName: "node-log") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.333050 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.333221 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-slash" (OuterVolumeSpecName: "host-slash") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.333292 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.333321 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.333754 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.333765 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.333815 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.334144 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b257b22f-9c2f-4138-ba10-5b93fa36baf8-utilities" (OuterVolumeSpecName: "utilities") pod "b257b22f-9c2f-4138-ba10-5b93fa36baf8" (UID: "b257b22f-9c2f-4138-ba10-5b93fa36baf8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.334192 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-log-socket" (OuterVolumeSpecName: "log-socket") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.334231 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.334324 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.334866 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.339337 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-kube-api-access-w9ts2" (OuterVolumeSpecName: "kube-api-access-w9ts2") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "kube-api-access-w9ts2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.339403 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.342128 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b257b22f-9c2f-4138-ba10-5b93fa36baf8-kube-api-access-frxpl" (OuterVolumeSpecName: "kube-api-access-frxpl") pod "b257b22f-9c2f-4138-ba10-5b93fa36baf8" (UID: "b257b22f-9c2f-4138-ba10-5b93fa36baf8"). InnerVolumeSpecName "kube-api-access-frxpl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.347724 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "55484aa7-5d82-4f2e-ab22-2ceae9c90c96" (UID: "55484aa7-5d82-4f2e-ab22-2ceae9c90c96"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.359493 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b257b22f-9c2f-4138-ba10-5b93fa36baf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b257b22f-9c2f-4138-ba10-5b93fa36baf8" (UID: "b257b22f-9c2f-4138-ba10-5b93fa36baf8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.433513 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-etc-openvswitch\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.433801 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.433880 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-run-ovn\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.433981 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-run-openvswitch\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.434052 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-kubelet\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.434123 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-cni-netd\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.434205 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-cni-bin\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.434287 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-var-lib-openvswitch\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.434380 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-ovnkube-script-lib\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.434497 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-run-netns\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.434566 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-run-systemd\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.434631 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-slash\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.434731 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-run-ovn-kubernetes\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.434804 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-ovn-node-metrics-cert\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.434865 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-node-log\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.434922 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqhx5\" (UniqueName: \"kubernetes.io/projected/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-kube-api-access-zqhx5\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.434983 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-ovnkube-config\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.435263 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-env-overrides\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.435347 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-log-socket\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.435433 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-systemd-units\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.435549 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.435607 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b257b22f-9c2f-4138-ba10-5b93fa36baf8-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.435658 4687 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.435711 4687 reconciler_common.go:293] "Volume 
detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-slash\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.435761 4687 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.435811 4687 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.435864 4687 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-node-log\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.435916 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.435965 4687 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.436019 4687 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-log-socket\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.436068 4687 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.436120 4687 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.436172 4687 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.436219 4687 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.436270 4687 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.436324 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9ts2\" (UniqueName: \"kubernetes.io/projected/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-kube-api-access-w9ts2\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.436376 4687 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55484aa7-5d82-4f2e-ab22-2ceae9c90c96-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.436443 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frxpl\" (UniqueName: \"kubernetes.io/projected/b257b22f-9c2f-4138-ba10-5b93fa36baf8-kube-api-access-frxpl\") on node 
\"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.436495 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b257b22f-9c2f-4138-ba10-5b93fa36baf8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.537707 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-node-log\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.537754 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqhx5\" (UniqueName: \"kubernetes.io/projected/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-kube-api-access-zqhx5\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.537778 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-ovnkube-config\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.537800 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-log-socket\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.537817 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-env-overrides\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.537825 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-node-log\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.537871 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-systemd-units\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.537834 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-systemd-units\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.537941 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-log-socket\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.537971 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-etc-openvswitch\") pod \"ovnkube-node-m6dfr\" (UID: 
\"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.537997 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538032 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-run-ovn\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538041 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-etc-openvswitch\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538080 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-run-openvswitch\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538104 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-kubelet\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538110 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538119 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-run-ovn\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538120 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-cni-netd\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538140 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-cni-netd\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538139 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-run-openvswitch\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc 
kubenswrapper[4687]: I0131 07:00:18.538155 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-kubelet\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538227 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-cni-bin\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538265 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-cni-bin\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538271 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-var-lib-openvswitch\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538308 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-var-lib-openvswitch\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538317 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-ovnkube-script-lib\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538351 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-run-netns\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538377 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-run-systemd\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538425 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-slash\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538446 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-run-ovn-kubernetes\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538463 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-ovn-node-metrics-cert\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538469 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-env-overrides\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538538 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-run-ovn-kubernetes\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538556 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-run-systemd\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538608 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-run-netns\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538624 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-host-slash\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.538653 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-ovnkube-config\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.539660 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-ovnkube-script-lib\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.542362 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-ovn-node-metrics-cert\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.554522 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqhx5\" (UniqueName: \"kubernetes.io/projected/b4cfb6eb-c1dc-46a6-8579-482c57b9f422-kube-api-access-zqhx5\") pod \"ovnkube-node-m6dfr\" (UID: \"b4cfb6eb-c1dc-46a6-8579-482c57b9f422\") " pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:18 crc kubenswrapper[4687]: I0131 07:00:18.640009 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.007378 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovnkube-controller/3.log" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.011287 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovn-acl-logging/0.log" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.012264 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zvpgn_55484aa7-5d82-4f2e-ab22-2ceae9c90c96/ovn-controller/0.log" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.012785 4687 generic.go:334] "Generic (PLEG): container finished" podID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerID="f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5" exitCode=0 Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.012822 4687 generic.go:334] "Generic (PLEG): container finished" podID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerID="fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d" exitCode=0 Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.012831 4687 generic.go:334] "Generic (PLEG): container finished" podID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerID="9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2" exitCode=0 Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.012841 4687 generic.go:334] "Generic (PLEG): container finished" podID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerID="d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1" exitCode=0 Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.012850 4687 generic.go:334] "Generic (PLEG): container finished" podID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" 
containerID="4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1" exitCode=0 Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.012857 4687 generic.go:334] "Generic (PLEG): container finished" podID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerID="5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758" exitCode=0 Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.012864 4687 generic.go:334] "Generic (PLEG): container finished" podID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerID="cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad" exitCode=143 Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.012872 4687 generic.go:334] "Generic (PLEG): container finished" podID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" containerID="a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443" exitCode=143 Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013003 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013295 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerDied","Data":"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013360 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerDied","Data":"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013382 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" 
event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerDied","Data":"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013399 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerDied","Data":"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013436 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerDied","Data":"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013450 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerDied","Data":"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013449 4687 scope.go:117] "RemoveContainer" containerID="f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013464 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013482 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013491 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013498 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013506 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013514 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013522 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013529 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013536 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013549 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerDied","Data":"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013571 4687 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013591 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013602 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013612 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013622 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013632 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013642 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013652 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013661 4687 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013670 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013688 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerDied","Data":"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013706 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013718 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013728 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013738 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013842 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1"} Jan 31 
07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013858 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013896 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013907 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013916 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013925 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013941 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvpgn" event={"ID":"55484aa7-5d82-4f2e-ab22-2ceae9c90c96","Type":"ContainerDied","Data":"c3e382f166a625460737379a3a5a2eea8a04d3ee45bb6a7050109432c7bf2b43"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013959 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013970 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013982 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.013992 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.014002 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.014014 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.014026 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.014036 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.014045 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.014055 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.018543 4687 generic.go:334] "Generic (PLEG): container finished" podID="b257b22f-9c2f-4138-ba10-5b93fa36baf8" containerID="ed7197029267da74212df66269585e3e693cff3034185bc992f02b144c82c8a8" exitCode=0 Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.018548 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtmn8" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.018637 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtmn8" event={"ID":"b257b22f-9c2f-4138-ba10-5b93fa36baf8","Type":"ContainerDied","Data":"ed7197029267da74212df66269585e3e693cff3034185bc992f02b144c82c8a8"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.018685 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed7197029267da74212df66269585e3e693cff3034185bc992f02b144c82c8a8"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.018716 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"831449559cc17040d18bc47380a8cb26c7ef97a75eb1773fc2a63a66125acaf7"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.018766 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b407e989eb276fbf8fae861bc16d4e38db39a6e3a410ea78c829aa7f16c2245d"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.018787 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtmn8" event={"ID":"b257b22f-9c2f-4138-ba10-5b93fa36baf8","Type":"ContainerDied","Data":"2838ea5ff200b7995042a1dbb78e31b2a4ec4dd3c55ff3c27d6e1855956d560f"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 
07:00:19.018804 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed7197029267da74212df66269585e3e693cff3034185bc992f02b144c82c8a8"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.018818 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"831449559cc17040d18bc47380a8cb26c7ef97a75eb1773fc2a63a66125acaf7"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.018830 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b407e989eb276fbf8fae861bc16d4e38db39a6e3a410ea78c829aa7f16c2245d"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.022191 4687 generic.go:334] "Generic (PLEG): container finished" podID="b4cfb6eb-c1dc-46a6-8579-482c57b9f422" containerID="6fc2d7d4740fbb65fbf4877cecbe11a5838fcfaa36599d8162f7d6a403e475fd" exitCode=0 Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.022264 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" event={"ID":"b4cfb6eb-c1dc-46a6-8579-482c57b9f422","Type":"ContainerDied","Data":"6fc2d7d4740fbb65fbf4877cecbe11a5838fcfaa36599d8162f7d6a403e475fd"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.022297 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" event={"ID":"b4cfb6eb-c1dc-46a6-8579-482c57b9f422","Type":"ContainerStarted","Data":"7050f86af24bbfd649134b28c52a2fe3fded64d8b826f8710b6eaf4a4aa4a773"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.028003 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77mzd_96c21054-65ed-4db4-969f-bbb10f612772/kube-multus/2.log" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.028184 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-77mzd" 
event={"ID":"96c21054-65ed-4db4-969f-bbb10f612772","Type":"ContainerStarted","Data":"e55e1186e5723b3de415be361aad6be1d8e7b39be94f20920bf4a025527ef4c8"} Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.045588 4687 scope.go:117] "RemoveContainer" containerID="900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.067798 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtmn8"] Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.071204 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtmn8"] Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.086186 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zvpgn"] Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.090353 4687 scope.go:117] "RemoveContainer" containerID="fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.093465 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zvpgn"] Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.116918 4687 scope.go:117] "RemoveContainer" containerID="9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.130215 4687 scope.go:117] "RemoveContainer" containerID="d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.144233 4687 scope.go:117] "RemoveContainer" containerID="4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.160337 4687 scope.go:117] "RemoveContainer" containerID="5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.173301 4687 scope.go:117] 
"RemoveContainer" containerID="cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.210484 4687 scope.go:117] "RemoveContainer" containerID="a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.225957 4687 scope.go:117] "RemoveContainer" containerID="62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.252892 4687 scope.go:117] "RemoveContainer" containerID="f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5" Jan 31 07:00:19 crc kubenswrapper[4687]: E0131 07:00:19.254627 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5\": container with ID starting with f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5 not found: ID does not exist" containerID="f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.254658 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5"} err="failed to get container status \"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5\": rpc error: code = NotFound desc = could not find container \"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5\": container with ID starting with f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.254680 4687 scope.go:117] "RemoveContainer" containerID="900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d" Jan 31 07:00:19 crc kubenswrapper[4687]: E0131 07:00:19.254995 4687 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\": container with ID starting with 900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d not found: ID does not exist" containerID="900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.255012 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d"} err="failed to get container status \"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\": rpc error: code = NotFound desc = could not find container \"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\": container with ID starting with 900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.255025 4687 scope.go:117] "RemoveContainer" containerID="fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d" Jan 31 07:00:19 crc kubenswrapper[4687]: E0131 07:00:19.255725 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\": container with ID starting with fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d not found: ID does not exist" containerID="fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.255867 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d"} err="failed to get container status \"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\": rpc error: code = NotFound desc = could not find container 
\"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\": container with ID starting with fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.255975 4687 scope.go:117] "RemoveContainer" containerID="9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2" Jan 31 07:00:19 crc kubenswrapper[4687]: E0131 07:00:19.256372 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\": container with ID starting with 9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2 not found: ID does not exist" containerID="9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.256466 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2"} err="failed to get container status \"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\": rpc error: code = NotFound desc = could not find container \"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\": container with ID starting with 9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.256561 4687 scope.go:117] "RemoveContainer" containerID="d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1" Jan 31 07:00:19 crc kubenswrapper[4687]: E0131 07:00:19.257642 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\": container with ID starting with d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1 not found: ID does not exist" 
containerID="d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.258102 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1"} err="failed to get container status \"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\": rpc error: code = NotFound desc = could not find container \"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\": container with ID starting with d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.258139 4687 scope.go:117] "RemoveContainer" containerID="4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1" Jan 31 07:00:19 crc kubenswrapper[4687]: E0131 07:00:19.258576 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\": container with ID starting with 4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1 not found: ID does not exist" containerID="4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.258697 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1"} err="failed to get container status \"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\": rpc error: code = NotFound desc = could not find container \"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\": container with ID starting with 4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.258836 4687 scope.go:117] 
"RemoveContainer" containerID="5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758" Jan 31 07:00:19 crc kubenswrapper[4687]: E0131 07:00:19.259501 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\": container with ID starting with 5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758 not found: ID does not exist" containerID="5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.259538 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758"} err="failed to get container status \"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\": rpc error: code = NotFound desc = could not find container \"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\": container with ID starting with 5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.259580 4687 scope.go:117] "RemoveContainer" containerID="cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad" Jan 31 07:00:19 crc kubenswrapper[4687]: E0131 07:00:19.260062 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\": container with ID starting with cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad not found: ID does not exist" containerID="cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.260221 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad"} err="failed to get container status \"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\": rpc error: code = NotFound desc = could not find container \"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\": container with ID starting with cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.260338 4687 scope.go:117] "RemoveContainer" containerID="a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443" Jan 31 07:00:19 crc kubenswrapper[4687]: E0131 07:00:19.260955 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\": container with ID starting with a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443 not found: ID does not exist" containerID="a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.261149 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443"} err="failed to get container status \"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\": rpc error: code = NotFound desc = could not find container \"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\": container with ID starting with a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.261174 4687 scope.go:117] "RemoveContainer" containerID="62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8" Jan 31 07:00:19 crc kubenswrapper[4687]: E0131 07:00:19.261700 4687 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\": container with ID starting with 62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8 not found: ID does not exist" containerID="62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.261749 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8"} err="failed to get container status \"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\": rpc error: code = NotFound desc = could not find container \"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\": container with ID starting with 62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.261769 4687 scope.go:117] "RemoveContainer" containerID="f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.262131 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5"} err="failed to get container status \"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5\": rpc error: code = NotFound desc = could not find container \"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5\": container with ID starting with f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.262153 4687 scope.go:117] "RemoveContainer" containerID="900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.262456 4687 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d"} err="failed to get container status \"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\": rpc error: code = NotFound desc = could not find container \"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\": container with ID starting with 900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.262484 4687 scope.go:117] "RemoveContainer" containerID="fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.262854 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d"} err="failed to get container status \"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\": rpc error: code = NotFound desc = could not find container \"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\": container with ID starting with fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.262940 4687 scope.go:117] "RemoveContainer" containerID="9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.263197 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2"} err="failed to get container status \"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\": rpc error: code = NotFound desc = could not find container \"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\": container with ID starting with 
9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.263221 4687 scope.go:117] "RemoveContainer" containerID="d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.263457 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1"} err="failed to get container status \"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\": rpc error: code = NotFound desc = could not find container \"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\": container with ID starting with d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.263483 4687 scope.go:117] "RemoveContainer" containerID="4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.263791 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1"} err="failed to get container status \"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\": rpc error: code = NotFound desc = could not find container \"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\": container with ID starting with 4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.263841 4687 scope.go:117] "RemoveContainer" containerID="5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.264548 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758"} err="failed to get container status \"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\": rpc error: code = NotFound desc = could not find container \"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\": container with ID starting with 5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.264659 4687 scope.go:117] "RemoveContainer" containerID="cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.265008 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad"} err="failed to get container status \"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\": rpc error: code = NotFound desc = could not find container \"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\": container with ID starting with cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.265034 4687 scope.go:117] "RemoveContainer" containerID="a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.265284 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443"} err="failed to get container status \"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\": rpc error: code = NotFound desc = could not find container \"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\": container with ID starting with a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443 not found: ID does not 
exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.265330 4687 scope.go:117] "RemoveContainer" containerID="62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.265714 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8"} err="failed to get container status \"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\": rpc error: code = NotFound desc = could not find container \"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\": container with ID starting with 62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.265737 4687 scope.go:117] "RemoveContainer" containerID="f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.265976 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5"} err="failed to get container status \"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5\": rpc error: code = NotFound desc = could not find container \"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5\": container with ID starting with f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.266005 4687 scope.go:117] "RemoveContainer" containerID="900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.266243 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d"} err="failed to get container status 
\"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\": rpc error: code = NotFound desc = could not find container \"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\": container with ID starting with 900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.266262 4687 scope.go:117] "RemoveContainer" containerID="fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.266556 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d"} err="failed to get container status \"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\": rpc error: code = NotFound desc = could not find container \"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\": container with ID starting with fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.266581 4687 scope.go:117] "RemoveContainer" containerID="9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.266966 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2"} err="failed to get container status \"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\": rpc error: code = NotFound desc = could not find container \"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\": container with ID starting with 9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.267098 4687 scope.go:117] "RemoveContainer" 
containerID="d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.267531 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1"} err="failed to get container status \"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\": rpc error: code = NotFound desc = could not find container \"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\": container with ID starting with d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.267552 4687 scope.go:117] "RemoveContainer" containerID="4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.267901 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1"} err="failed to get container status \"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\": rpc error: code = NotFound desc = could not find container \"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\": container with ID starting with 4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.267925 4687 scope.go:117] "RemoveContainer" containerID="5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.268223 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758"} err="failed to get container status \"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\": rpc error: code = NotFound desc = could 
not find container \"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\": container with ID starting with 5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.268327 4687 scope.go:117] "RemoveContainer" containerID="cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.268652 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad"} err="failed to get container status \"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\": rpc error: code = NotFound desc = could not find container \"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\": container with ID starting with cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.268671 4687 scope.go:117] "RemoveContainer" containerID="a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.269034 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443"} err="failed to get container status \"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\": rpc error: code = NotFound desc = could not find container \"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\": container with ID starting with a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.269176 4687 scope.go:117] "RemoveContainer" containerID="62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 
07:00:19.269568 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8"} err="failed to get container status \"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\": rpc error: code = NotFound desc = could not find container \"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\": container with ID starting with 62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.269659 4687 scope.go:117] "RemoveContainer" containerID="f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.270023 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5"} err="failed to get container status \"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5\": rpc error: code = NotFound desc = could not find container \"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5\": container with ID starting with f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.270068 4687 scope.go:117] "RemoveContainer" containerID="900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.270365 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d"} err="failed to get container status \"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\": rpc error: code = NotFound desc = could not find container \"900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d\": container with ID starting with 
900496edd7f6ffbfedcf07939680f3026e71f108d2be1c2105a15e28f007fa5d not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.270487 4687 scope.go:117] "RemoveContainer" containerID="fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.270869 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d"} err="failed to get container status \"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\": rpc error: code = NotFound desc = could not find container \"fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d\": container with ID starting with fe01f6c15b78903b3cfa609e5f2c003480057a23e3f71ea5c8c3098d7c0af30d not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.270896 4687 scope.go:117] "RemoveContainer" containerID="9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.271254 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2"} err="failed to get container status \"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\": rpc error: code = NotFound desc = could not find container \"9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2\": container with ID starting with 9d66609c9138e38e3ba7bb408f2858969c47cb5c83a3c4b7a0c3012822a723e2 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.271360 4687 scope.go:117] "RemoveContainer" containerID="d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.271657 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1"} err="failed to get container status \"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\": rpc error: code = NotFound desc = could not find container \"d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1\": container with ID starting with d2b83f5b22e7a0a93ebe8fc61bd0c85aaab20991a0c94870faaefed98e5072a1 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.271749 4687 scope.go:117] "RemoveContainer" containerID="4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.272139 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1"} err="failed to get container status \"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\": rpc error: code = NotFound desc = could not find container \"4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1\": container with ID starting with 4de1e3341315ed8b59febec34248abef1ed468e2a956410642cb7c2a0e92c0d1 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.272180 4687 scope.go:117] "RemoveContainer" containerID="5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.272559 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758"} err="failed to get container status \"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\": rpc error: code = NotFound desc = could not find container \"5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758\": container with ID starting with 5fa1b5e99223c97082dfcc3c9ded72baa355266c38163d277619be5aca315758 not found: ID does not 
exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.272667 4687 scope.go:117] "RemoveContainer" containerID="cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.273637 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad"} err="failed to get container status \"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\": rpc error: code = NotFound desc = could not find container \"cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad\": container with ID starting with cfcf447c388b51ab402b46654989f833fb4111785804c1ab9f1f4a3d8f822aad not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.273719 4687 scope.go:117] "RemoveContainer" containerID="a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.275097 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443"} err="failed to get container status \"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\": rpc error: code = NotFound desc = could not find container \"a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443\": container with ID starting with a819d2145c5dbaf37e92fdfb703b38455d47c2d49d5d225abbb359aeff559443 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.275173 4687 scope.go:117] "RemoveContainer" containerID="62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.275574 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8"} err="failed to get container status 
\"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\": rpc error: code = NotFound desc = could not find container \"62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8\": container with ID starting with 62d13b91ec80ae063669f82bf871d3a581365354a321d9d79e1535b2952455d8 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.275602 4687 scope.go:117] "RemoveContainer" containerID="f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.276019 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5"} err="failed to get container status \"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5\": rpc error: code = NotFound desc = could not find container \"f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5\": container with ID starting with f61d0c02462ba3c402fa838eeb2def14652ada70b9d71101516015814343bab5 not found: ID does not exist" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.612183 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55484aa7-5d82-4f2e-ab22-2ceae9c90c96" path="/var/lib/kubelet/pods/55484aa7-5d82-4f2e-ab22-2ceae9c90c96/volumes" Jan 31 07:00:19 crc kubenswrapper[4687]: I0131 07:00:19.614112 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b257b22f-9c2f-4138-ba10-5b93fa36baf8" path="/var/lib/kubelet/pods/b257b22f-9c2f-4138-ba10-5b93fa36baf8/volumes" Jan 31 07:00:20 crc kubenswrapper[4687]: I0131 07:00:20.038233 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" event={"ID":"b4cfb6eb-c1dc-46a6-8579-482c57b9f422","Type":"ContainerStarted","Data":"8715c717aeac07e004c94c6196198bb9c9c63f5e7a73439dc84401bf9e356260"} Jan 31 07:00:20 crc kubenswrapper[4687]: I0131 07:00:20.038548 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" event={"ID":"b4cfb6eb-c1dc-46a6-8579-482c57b9f422","Type":"ContainerStarted","Data":"381d020d9c364ff72311875aa15c7c0ab17408fe9301a14573008d57ca3beda0"} Jan 31 07:00:20 crc kubenswrapper[4687]: I0131 07:00:20.038561 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" event={"ID":"b4cfb6eb-c1dc-46a6-8579-482c57b9f422","Type":"ContainerStarted","Data":"bee4b050b4f721bfc5f80c905a86d1f97d87e6446423cc4b44c48c96aa27da17"} Jan 31 07:00:20 crc kubenswrapper[4687]: I0131 07:00:20.038570 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" event={"ID":"b4cfb6eb-c1dc-46a6-8579-482c57b9f422","Type":"ContainerStarted","Data":"9f763210c8dc21e15d733f91bd8d8137b547911bba2081b8858ad65e80dbb2cf"} Jan 31 07:00:20 crc kubenswrapper[4687]: I0131 07:00:20.038579 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" event={"ID":"b4cfb6eb-c1dc-46a6-8579-482c57b9f422","Type":"ContainerStarted","Data":"f50002243a1475e474ba6985b998fdcdf18db13129e5666d4f649bb516037380"} Jan 31 07:00:20 crc kubenswrapper[4687]: I0131 07:00:20.038586 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" event={"ID":"b4cfb6eb-c1dc-46a6-8579-482c57b9f422","Type":"ContainerStarted","Data":"fb4312b2e8fcf51497f70b6cfc78de691ae93a19dd7c6ffc868e740f04988809"} Jan 31 07:00:21 crc kubenswrapper[4687]: I0131 07:00:21.188261 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m555d"] Jan 31 07:00:21 crc kubenswrapper[4687]: I0131 07:00:21.190872 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:21 crc kubenswrapper[4687]: I0131 07:00:21.274097 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-catalog-content\") pod \"certified-operators-m555d\" (UID: \"0cd756bb-c628-4667-bfc0-8eaa2fe6b856\") " pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:21 crc kubenswrapper[4687]: I0131 07:00:21.274158 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5zfr\" (UniqueName: \"kubernetes.io/projected/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-kube-api-access-d5zfr\") pod \"certified-operators-m555d\" (UID: \"0cd756bb-c628-4667-bfc0-8eaa2fe6b856\") " pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:21 crc kubenswrapper[4687]: I0131 07:00:21.274184 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-utilities\") pod \"certified-operators-m555d\" (UID: \"0cd756bb-c628-4667-bfc0-8eaa2fe6b856\") " pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:21 crc kubenswrapper[4687]: I0131 07:00:21.375237 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-utilities\") pod \"certified-operators-m555d\" (UID: \"0cd756bb-c628-4667-bfc0-8eaa2fe6b856\") " pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:21 crc kubenswrapper[4687]: I0131 07:00:21.375355 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-catalog-content\") pod 
\"certified-operators-m555d\" (UID: \"0cd756bb-c628-4667-bfc0-8eaa2fe6b856\") " pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:21 crc kubenswrapper[4687]: I0131 07:00:21.375389 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5zfr\" (UniqueName: \"kubernetes.io/projected/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-kube-api-access-d5zfr\") pod \"certified-operators-m555d\" (UID: \"0cd756bb-c628-4667-bfc0-8eaa2fe6b856\") " pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:21 crc kubenswrapper[4687]: I0131 07:00:21.375780 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-utilities\") pod \"certified-operators-m555d\" (UID: \"0cd756bb-c628-4667-bfc0-8eaa2fe6b856\") " pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:21 crc kubenswrapper[4687]: I0131 07:00:21.376133 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-catalog-content\") pod \"certified-operators-m555d\" (UID: \"0cd756bb-c628-4667-bfc0-8eaa2fe6b856\") " pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:21 crc kubenswrapper[4687]: I0131 07:00:21.400331 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5zfr\" (UniqueName: \"kubernetes.io/projected/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-kube-api-access-d5zfr\") pod \"certified-operators-m555d\" (UID: \"0cd756bb-c628-4667-bfc0-8eaa2fe6b856\") " pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:21 crc kubenswrapper[4687]: I0131 07:00:21.512038 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:21 crc kubenswrapper[4687]: E0131 07:00:21.535563 4687 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-m555d_openshift-marketplace_0cd756bb-c628-4667-bfc0-8eaa2fe6b856_0(3a3e5052268a12fcd078559024b0756b10e62536c4a80ab6c02ec62bc6a40ab7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 07:00:21 crc kubenswrapper[4687]: E0131 07:00:21.535630 4687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-m555d_openshift-marketplace_0cd756bb-c628-4667-bfc0-8eaa2fe6b856_0(3a3e5052268a12fcd078559024b0756b10e62536c4a80ab6c02ec62bc6a40ab7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:21 crc kubenswrapper[4687]: E0131 07:00:21.535653 4687 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-m555d_openshift-marketplace_0cd756bb-c628-4667-bfc0-8eaa2fe6b856_0(3a3e5052268a12fcd078559024b0756b10e62536c4a80ab6c02ec62bc6a40ab7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:21 crc kubenswrapper[4687]: E0131 07:00:21.535696 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"certified-operators-m555d_openshift-marketplace(0cd756bb-c628-4667-bfc0-8eaa2fe6b856)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"certified-operators-m555d_openshift-marketplace(0cd756bb-c628-4667-bfc0-8eaa2fe6b856)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-m555d_openshift-marketplace_0cd756bb-c628-4667-bfc0-8eaa2fe6b856_0(3a3e5052268a12fcd078559024b0756b10e62536c4a80ab6c02ec62bc6a40ab7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/certified-operators-m555d" podUID="0cd756bb-c628-4667-bfc0-8eaa2fe6b856" Jan 31 07:00:23 crc kubenswrapper[4687]: I0131 07:00:23.058734 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" event={"ID":"b4cfb6eb-c1dc-46a6-8579-482c57b9f422","Type":"ContainerStarted","Data":"00e9ff2774cdf9191cd787f9413e8346ef0ec756d06bfd378035dc1d2e4d963e"} Jan 31 07:00:25 crc kubenswrapper[4687]: I0131 07:00:25.089990 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" event={"ID":"b4cfb6eb-c1dc-46a6-8579-482c57b9f422","Type":"ContainerStarted","Data":"b89abbda5239fbf9118fa93b2e0788f39efc356961258220d6ce1fb22db757bc"} Jan 31 07:00:25 crc kubenswrapper[4687]: I0131 07:00:25.119171 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" podStartSLOduration=7.119095282 podStartE2EDuration="7.119095282s" podCreationTimestamp="2026-01-31 07:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:00:25.113623482 +0000 UTC 
m=+1051.390883067" watchObservedRunningTime="2026-01-31 07:00:25.119095282 +0000 UTC m=+1051.396354887" Jan 31 07:00:25 crc kubenswrapper[4687]: I0131 07:00:25.990558 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.030220 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tnn4l"] Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.095620 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tnn4l" podUID="e02a597f-422d-4b98-b829-77b6e4d72318" containerName="registry-server" containerID="cri-o://6c0d9997b0409174d4e6753df59ec0fd001107b485f7f8c417b43cfabf5c535a" gracePeriod=2 Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.096118 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.096147 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.096159 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.122091 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.124934 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.630791 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m555d"] Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.630976 4687 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.631938 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:26 crc kubenswrapper[4687]: E0131 07:00:26.661811 4687 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-m555d_openshift-marketplace_0cd756bb-c628-4667-bfc0-8eaa2fe6b856_0(66f8cfda1b1bafd558f08b4be49d6297d86ff7df93cff259ca63d25aa2cc890d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 07:00:26 crc kubenswrapper[4687]: E0131 07:00:26.662159 4687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-m555d_openshift-marketplace_0cd756bb-c628-4667-bfc0-8eaa2fe6b856_0(66f8cfda1b1bafd558f08b4be49d6297d86ff7df93cff259ca63d25aa2cc890d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:26 crc kubenswrapper[4687]: E0131 07:00:26.662181 4687 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-m555d_openshift-marketplace_0cd756bb-c628-4667-bfc0-8eaa2fe6b856_0(66f8cfda1b1bafd558f08b4be49d6297d86ff7df93cff259ca63d25aa2cc890d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:26 crc kubenswrapper[4687]: E0131 07:00:26.662232 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"certified-operators-m555d_openshift-marketplace(0cd756bb-c628-4667-bfc0-8eaa2fe6b856)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"certified-operators-m555d_openshift-marketplace(0cd756bb-c628-4667-bfc0-8eaa2fe6b856)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-m555d_openshift-marketplace_0cd756bb-c628-4667-bfc0-8eaa2fe6b856_0(66f8cfda1b1bafd558f08b4be49d6297d86ff7df93cff259ca63d25aa2cc890d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/certified-operators-m555d" podUID="0cd756bb-c628-4667-bfc0-8eaa2fe6b856" Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.824322 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.942077 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6rzv\" (UniqueName: \"kubernetes.io/projected/e02a597f-422d-4b98-b829-77b6e4d72318-kube-api-access-h6rzv\") pod \"e02a597f-422d-4b98-b829-77b6e4d72318\" (UID: \"e02a597f-422d-4b98-b829-77b6e4d72318\") " Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.942189 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02a597f-422d-4b98-b829-77b6e4d72318-utilities\") pod \"e02a597f-422d-4b98-b829-77b6e4d72318\" (UID: \"e02a597f-422d-4b98-b829-77b6e4d72318\") " Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.942221 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02a597f-422d-4b98-b829-77b6e4d72318-catalog-content\") pod \"e02a597f-422d-4b98-b829-77b6e4d72318\" (UID: \"e02a597f-422d-4b98-b829-77b6e4d72318\") " Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.943123 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e02a597f-422d-4b98-b829-77b6e4d72318-utilities" (OuterVolumeSpecName: "utilities") pod "e02a597f-422d-4b98-b829-77b6e4d72318" (UID: "e02a597f-422d-4b98-b829-77b6e4d72318"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.947557 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e02a597f-422d-4b98-b829-77b6e4d72318-kube-api-access-h6rzv" (OuterVolumeSpecName: "kube-api-access-h6rzv") pod "e02a597f-422d-4b98-b829-77b6e4d72318" (UID: "e02a597f-422d-4b98-b829-77b6e4d72318"). InnerVolumeSpecName "kube-api-access-h6rzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:00:26 crc kubenswrapper[4687]: I0131 07:00:26.991166 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e02a597f-422d-4b98-b829-77b6e4d72318-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e02a597f-422d-4b98-b829-77b6e4d72318" (UID: "e02a597f-422d-4b98-b829-77b6e4d72318"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.043972 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6rzv\" (UniqueName: \"kubernetes.io/projected/e02a597f-422d-4b98-b829-77b6e4d72318-kube-api-access-h6rzv\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.044002 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02a597f-422d-4b98-b829-77b6e4d72318-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.044016 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02a597f-422d-4b98-b829-77b6e4d72318-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.102268 4687 generic.go:334] "Generic (PLEG): container finished" podID="e02a597f-422d-4b98-b829-77b6e4d72318" containerID="6c0d9997b0409174d4e6753df59ec0fd001107b485f7f8c417b43cfabf5c535a" exitCode=0 Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.102295 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnn4l" event={"ID":"e02a597f-422d-4b98-b829-77b6e4d72318","Type":"ContainerDied","Data":"6c0d9997b0409174d4e6753df59ec0fd001107b485f7f8c417b43cfabf5c535a"} Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.102322 4687 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-tnn4l" Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.102356 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnn4l" event={"ID":"e02a597f-422d-4b98-b829-77b6e4d72318","Type":"ContainerDied","Data":"1cb133bc630ac9446ae41fa5d4655eb8d5d186e03c8e900fc942042cbfad5493"} Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.102380 4687 scope.go:117] "RemoveContainer" containerID="6c0d9997b0409174d4e6753df59ec0fd001107b485f7f8c417b43cfabf5c535a" Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.128680 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tnn4l"] Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.130465 4687 scope.go:117] "RemoveContainer" containerID="bd5d3f96442b8c94e03d59731332fd074156c313734841824448469cf9d3c201" Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.132646 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tnn4l"] Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.162399 4687 scope.go:117] "RemoveContainer" containerID="885e3add15a0fddb4a5c301bc7cbf12aff88c6376be0d2d975f10f7874b61e0b" Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.175236 4687 scope.go:117] "RemoveContainer" containerID="6c0d9997b0409174d4e6753df59ec0fd001107b485f7f8c417b43cfabf5c535a" Jan 31 07:00:27 crc kubenswrapper[4687]: E0131 07:00:27.175681 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c0d9997b0409174d4e6753df59ec0fd001107b485f7f8c417b43cfabf5c535a\": container with ID starting with 6c0d9997b0409174d4e6753df59ec0fd001107b485f7f8c417b43cfabf5c535a not found: ID does not exist" containerID="6c0d9997b0409174d4e6753df59ec0fd001107b485f7f8c417b43cfabf5c535a" Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.175722 
4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c0d9997b0409174d4e6753df59ec0fd001107b485f7f8c417b43cfabf5c535a"} err="failed to get container status \"6c0d9997b0409174d4e6753df59ec0fd001107b485f7f8c417b43cfabf5c535a\": rpc error: code = NotFound desc = could not find container \"6c0d9997b0409174d4e6753df59ec0fd001107b485f7f8c417b43cfabf5c535a\": container with ID starting with 6c0d9997b0409174d4e6753df59ec0fd001107b485f7f8c417b43cfabf5c535a not found: ID does not exist" Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.175748 4687 scope.go:117] "RemoveContainer" containerID="bd5d3f96442b8c94e03d59731332fd074156c313734841824448469cf9d3c201" Jan 31 07:00:27 crc kubenswrapper[4687]: E0131 07:00:27.176162 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd5d3f96442b8c94e03d59731332fd074156c313734841824448469cf9d3c201\": container with ID starting with bd5d3f96442b8c94e03d59731332fd074156c313734841824448469cf9d3c201 not found: ID does not exist" containerID="bd5d3f96442b8c94e03d59731332fd074156c313734841824448469cf9d3c201" Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.176186 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd5d3f96442b8c94e03d59731332fd074156c313734841824448469cf9d3c201"} err="failed to get container status \"bd5d3f96442b8c94e03d59731332fd074156c313734841824448469cf9d3c201\": rpc error: code = NotFound desc = could not find container \"bd5d3f96442b8c94e03d59731332fd074156c313734841824448469cf9d3c201\": container with ID starting with bd5d3f96442b8c94e03d59731332fd074156c313734841824448469cf9d3c201 not found: ID does not exist" Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.176200 4687 scope.go:117] "RemoveContainer" containerID="885e3add15a0fddb4a5c301bc7cbf12aff88c6376be0d2d975f10f7874b61e0b" Jan 31 07:00:27 crc kubenswrapper[4687]: E0131 
07:00:27.176603 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"885e3add15a0fddb4a5c301bc7cbf12aff88c6376be0d2d975f10f7874b61e0b\": container with ID starting with 885e3add15a0fddb4a5c301bc7cbf12aff88c6376be0d2d975f10f7874b61e0b not found: ID does not exist" containerID="885e3add15a0fddb4a5c301bc7cbf12aff88c6376be0d2d975f10f7874b61e0b" Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.176623 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"885e3add15a0fddb4a5c301bc7cbf12aff88c6376be0d2d975f10f7874b61e0b"} err="failed to get container status \"885e3add15a0fddb4a5c301bc7cbf12aff88c6376be0d2d975f10f7874b61e0b\": rpc error: code = NotFound desc = could not find container \"885e3add15a0fddb4a5c301bc7cbf12aff88c6376be0d2d975f10f7874b61e0b\": container with ID starting with 885e3add15a0fddb4a5c301bc7cbf12aff88c6376be0d2d975f10f7874b61e0b not found: ID does not exist" Jan 31 07:00:27 crc kubenswrapper[4687]: I0131 07:00:27.609091 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e02a597f-422d-4b98-b829-77b6e4d72318" path="/var/lib/kubelet/pods/e02a597f-422d-4b98-b829-77b6e4d72318/volumes" Jan 31 07:00:37 crc kubenswrapper[4687]: I0131 07:00:37.602537 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:37 crc kubenswrapper[4687]: I0131 07:00:37.603767 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:37 crc kubenswrapper[4687]: I0131 07:00:37.834150 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m555d"] Jan 31 07:00:38 crc kubenswrapper[4687]: I0131 07:00:38.159702 4687 generic.go:334] "Generic (PLEG): container finished" podID="0cd756bb-c628-4667-bfc0-8eaa2fe6b856" containerID="37f8bf3e1d517c1caabaa01c3879b8b5f46e67653af82cded4f560d4556d3ea0" exitCode=0 Jan 31 07:00:38 crc kubenswrapper[4687]: I0131 07:00:38.159794 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m555d" event={"ID":"0cd756bb-c628-4667-bfc0-8eaa2fe6b856","Type":"ContainerDied","Data":"37f8bf3e1d517c1caabaa01c3879b8b5f46e67653af82cded4f560d4556d3ea0"} Jan 31 07:00:38 crc kubenswrapper[4687]: I0131 07:00:38.160107 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m555d" event={"ID":"0cd756bb-c628-4667-bfc0-8eaa2fe6b856","Type":"ContainerStarted","Data":"5b3f048fb79c4b004a8b0882e6fbd1821098f8d3ffd5a37ce99633582768a9ab"} Jan 31 07:00:39 crc kubenswrapper[4687]: I0131 07:00:39.168472 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m555d" event={"ID":"0cd756bb-c628-4667-bfc0-8eaa2fe6b856","Type":"ContainerStarted","Data":"b429759d6ce01cc3f2489323faa1bd3f146e60ff945b5cae41debbe670920239"} Jan 31 07:00:40 crc kubenswrapper[4687]: I0131 07:00:40.176136 4687 generic.go:334] "Generic (PLEG): container finished" podID="0cd756bb-c628-4667-bfc0-8eaa2fe6b856" containerID="b429759d6ce01cc3f2489323faa1bd3f146e60ff945b5cae41debbe670920239" exitCode=0 Jan 31 07:00:40 crc kubenswrapper[4687]: I0131 07:00:40.176248 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m555d" 
event={"ID":"0cd756bb-c628-4667-bfc0-8eaa2fe6b856","Type":"ContainerDied","Data":"b429759d6ce01cc3f2489323faa1bd3f146e60ff945b5cae41debbe670920239"} Jan 31 07:00:41 crc kubenswrapper[4687]: I0131 07:00:41.183394 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m555d" event={"ID":"0cd756bb-c628-4667-bfc0-8eaa2fe6b856","Type":"ContainerStarted","Data":"80363433b938133af17959e6c372a0ec347842659fafeba331f3dca58321852f"} Jan 31 07:00:41 crc kubenswrapper[4687]: I0131 07:00:41.198724 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m555d" podStartSLOduration=17.722261186 podStartE2EDuration="20.198693561s" podCreationTimestamp="2026-01-31 07:00:21 +0000 UTC" firstStartedPulling="2026-01-31 07:00:38.161430645 +0000 UTC m=+1064.438690230" lastFinishedPulling="2026-01-31 07:00:40.63786304 +0000 UTC m=+1066.915122605" observedRunningTime="2026-01-31 07:00:41.197465407 +0000 UTC m=+1067.474724972" watchObservedRunningTime="2026-01-31 07:00:41.198693561 +0000 UTC m=+1067.475953136" Jan 31 07:00:41 crc kubenswrapper[4687]: I0131 07:00:41.513059 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:41 crc kubenswrapper[4687]: I0131 07:00:41.513138 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m555d" Jan 31 07:00:42 crc kubenswrapper[4687]: I0131 07:00:42.559032 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-m555d" podUID="0cd756bb-c628-4667-bfc0-8eaa2fe6b856" containerName="registry-server" probeResult="failure" output=< Jan 31 07:00:42 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Jan 31 07:00:42 crc kubenswrapper[4687]: > Jan 31 07:00:48 crc kubenswrapper[4687]: I0131 07:00:48.670049 4687 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m6dfr" Jan 31 07:00:49 crc kubenswrapper[4687]: I0131 07:00:49.891356 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz"] Jan 31 07:00:49 crc kubenswrapper[4687]: E0131 07:00:49.891665 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02a597f-422d-4b98-b829-77b6e4d72318" containerName="extract-utilities" Jan 31 07:00:49 crc kubenswrapper[4687]: I0131 07:00:49.891682 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02a597f-422d-4b98-b829-77b6e4d72318" containerName="extract-utilities" Jan 31 07:00:49 crc kubenswrapper[4687]: E0131 07:00:49.891699 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02a597f-422d-4b98-b829-77b6e4d72318" containerName="registry-server" Jan 31 07:00:49 crc kubenswrapper[4687]: I0131 07:00:49.891707 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02a597f-422d-4b98-b829-77b6e4d72318" containerName="registry-server" Jan 31 07:00:49 crc kubenswrapper[4687]: E0131 07:00:49.891722 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02a597f-422d-4b98-b829-77b6e4d72318" containerName="extract-content" Jan 31 07:00:49 crc kubenswrapper[4687]: I0131 07:00:49.891730 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02a597f-422d-4b98-b829-77b6e4d72318" containerName="extract-content" Jan 31 07:00:49 crc kubenswrapper[4687]: I0131 07:00:49.891849 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e02a597f-422d-4b98-b829-77b6e4d72318" containerName="registry-server" Jan 31 07:00:49 crc kubenswrapper[4687]: I0131 07:00:49.892676 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz" Jan 31 07:00:49 crc kubenswrapper[4687]: I0131 07:00:49.896350 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 07:00:49 crc kubenswrapper[4687]: I0131 07:00:49.906022 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz"] Jan 31 07:00:49 crc kubenswrapper[4687]: I0131 07:00:49.935539 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7qzj\" (UniqueName: \"kubernetes.io/projected/838dbbef-88b2-4605-9482-2628852377fa-kube-api-access-d7qzj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz\" (UID: \"838dbbef-88b2-4605-9482-2628852377fa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz" Jan 31 07:00:49 crc kubenswrapper[4687]: I0131 07:00:49.935648 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/838dbbef-88b2-4605-9482-2628852377fa-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz\" (UID: \"838dbbef-88b2-4605-9482-2628852377fa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz" Jan 31 07:00:49 crc kubenswrapper[4687]: I0131 07:00:49.935691 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/838dbbef-88b2-4605-9482-2628852377fa-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz\" (UID: \"838dbbef-88b2-4605-9482-2628852377fa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz" Jan 31 07:00:50 crc kubenswrapper[4687]: 
I0131 07:00:50.037635 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7qzj\" (UniqueName: \"kubernetes.io/projected/838dbbef-88b2-4605-9482-2628852377fa-kube-api-access-d7qzj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz\" (UID: \"838dbbef-88b2-4605-9482-2628852377fa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz" Jan 31 07:00:50 crc kubenswrapper[4687]: I0131 07:00:50.037744 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/838dbbef-88b2-4605-9482-2628852377fa-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz\" (UID: \"838dbbef-88b2-4605-9482-2628852377fa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz" Jan 31 07:00:50 crc kubenswrapper[4687]: I0131 07:00:50.037847 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/838dbbef-88b2-4605-9482-2628852377fa-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz\" (UID: \"838dbbef-88b2-4605-9482-2628852377fa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz" Jan 31 07:00:50 crc kubenswrapper[4687]: I0131 07:00:50.038552 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/838dbbef-88b2-4605-9482-2628852377fa-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz\" (UID: \"838dbbef-88b2-4605-9482-2628852377fa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz" Jan 31 07:00:50 crc kubenswrapper[4687]: I0131 07:00:50.038904 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/838dbbef-88b2-4605-9482-2628852377fa-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz\" (UID: \"838dbbef-88b2-4605-9482-2628852377fa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz"
Jan 31 07:00:50 crc kubenswrapper[4687]: I0131 07:00:50.057035 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7qzj\" (UniqueName: \"kubernetes.io/projected/838dbbef-88b2-4605-9482-2628852377fa-kube-api-access-d7qzj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz\" (UID: \"838dbbef-88b2-4605-9482-2628852377fa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz"
Jan 31 07:00:50 crc kubenswrapper[4687]: I0131 07:00:50.240218 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz"
Jan 31 07:00:50 crc kubenswrapper[4687]: I0131 07:00:50.445737 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz"]
Jan 31 07:00:51 crc kubenswrapper[4687]: I0131 07:00:51.233704 4687 generic.go:334] "Generic (PLEG): container finished" podID="838dbbef-88b2-4605-9482-2628852377fa" containerID="9886bd63b568d25087a7170d872bce974b58edbb125fb7fd4aceddfda23d43b4" exitCode=0
Jan 31 07:00:51 crc kubenswrapper[4687]: I0131 07:00:51.233774 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz" event={"ID":"838dbbef-88b2-4605-9482-2628852377fa","Type":"ContainerDied","Data":"9886bd63b568d25087a7170d872bce974b58edbb125fb7fd4aceddfda23d43b4"}
Jan 31 07:00:51 crc kubenswrapper[4687]: I0131 07:00:51.233982 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz" event={"ID":"838dbbef-88b2-4605-9482-2628852377fa","Type":"ContainerStarted","Data":"725495917b2bc39699bb62e83980bbed4b189694c3b065b71392b5413489a02c"}
Jan 31 07:00:51 crc kubenswrapper[4687]: I0131 07:00:51.576167 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m555d"
Jan 31 07:00:51 crc kubenswrapper[4687]: I0131 07:00:51.614923 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m555d"
Jan 31 07:00:52 crc kubenswrapper[4687]: I0131 07:00:52.455676 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-69t9n"]
Jan 31 07:00:52 crc kubenswrapper[4687]: I0131 07:00:52.463013 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-69t9n"
Jan 31 07:00:52 crc kubenswrapper[4687]: I0131 07:00:52.467225 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-69t9n"]
Jan 31 07:00:52 crc kubenswrapper[4687]: I0131 07:00:52.567708 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe4784c-b61b-4947-a9e6-3375ec34a695-catalog-content\") pod \"redhat-operators-69t9n\" (UID: \"abe4784c-b61b-4947-a9e6-3375ec34a695\") " pod="openshift-marketplace/redhat-operators-69t9n"
Jan 31 07:00:52 crc kubenswrapper[4687]: I0131 07:00:52.567769 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr2bh\" (UniqueName: \"kubernetes.io/projected/abe4784c-b61b-4947-a9e6-3375ec34a695-kube-api-access-xr2bh\") pod \"redhat-operators-69t9n\" (UID: \"abe4784c-b61b-4947-a9e6-3375ec34a695\") " pod="openshift-marketplace/redhat-operators-69t9n"
Jan 31 07:00:52 crc kubenswrapper[4687]: I0131 07:00:52.567790 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe4784c-b61b-4947-a9e6-3375ec34a695-utilities\") pod \"redhat-operators-69t9n\" (UID: \"abe4784c-b61b-4947-a9e6-3375ec34a695\") " pod="openshift-marketplace/redhat-operators-69t9n"
Jan 31 07:00:52 crc kubenswrapper[4687]: I0131 07:00:52.669188 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr2bh\" (UniqueName: \"kubernetes.io/projected/abe4784c-b61b-4947-a9e6-3375ec34a695-kube-api-access-xr2bh\") pod \"redhat-operators-69t9n\" (UID: \"abe4784c-b61b-4947-a9e6-3375ec34a695\") " pod="openshift-marketplace/redhat-operators-69t9n"
Jan 31 07:00:52 crc kubenswrapper[4687]: I0131 07:00:52.669359 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe4784c-b61b-4947-a9e6-3375ec34a695-utilities\") pod \"redhat-operators-69t9n\" (UID: \"abe4784c-b61b-4947-a9e6-3375ec34a695\") " pod="openshift-marketplace/redhat-operators-69t9n"
Jan 31 07:00:52 crc kubenswrapper[4687]: I0131 07:00:52.670293 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe4784c-b61b-4947-a9e6-3375ec34a695-catalog-content\") pod \"redhat-operators-69t9n\" (UID: \"abe4784c-b61b-4947-a9e6-3375ec34a695\") " pod="openshift-marketplace/redhat-operators-69t9n"
Jan 31 07:00:52 crc kubenswrapper[4687]: I0131 07:00:52.670681 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe4784c-b61b-4947-a9e6-3375ec34a695-utilities\") pod \"redhat-operators-69t9n\" (UID: \"abe4784c-b61b-4947-a9e6-3375ec34a695\") " pod="openshift-marketplace/redhat-operators-69t9n"
Jan 31 07:00:52 crc kubenswrapper[4687]: I0131 07:00:52.670634 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe4784c-b61b-4947-a9e6-3375ec34a695-catalog-content\") pod \"redhat-operators-69t9n\" (UID: \"abe4784c-b61b-4947-a9e6-3375ec34a695\") " pod="openshift-marketplace/redhat-operators-69t9n"
Jan 31 07:00:52 crc kubenswrapper[4687]: I0131 07:00:52.690883 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr2bh\" (UniqueName: \"kubernetes.io/projected/abe4784c-b61b-4947-a9e6-3375ec34a695-kube-api-access-xr2bh\") pod \"redhat-operators-69t9n\" (UID: \"abe4784c-b61b-4947-a9e6-3375ec34a695\") " pod="openshift-marketplace/redhat-operators-69t9n"
Jan 31 07:00:52 crc kubenswrapper[4687]: I0131 07:00:52.787182 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-69t9n"
Jan 31 07:00:52 crc kubenswrapper[4687]: I0131 07:00:52.978307 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-69t9n"]
Jan 31 07:00:53 crc kubenswrapper[4687]: I0131 07:00:53.244618 4687 generic.go:334] "Generic (PLEG): container finished" podID="abe4784c-b61b-4947-a9e6-3375ec34a695" containerID="a59dcf0634dc7f4203d83afe756f90b913609a58c748fb541e780bcfffe3849b" exitCode=0
Jan 31 07:00:53 crc kubenswrapper[4687]: I0131 07:00:53.244697 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69t9n" event={"ID":"abe4784c-b61b-4947-a9e6-3375ec34a695","Type":"ContainerDied","Data":"a59dcf0634dc7f4203d83afe756f90b913609a58c748fb541e780bcfffe3849b"}
Jan 31 07:00:53 crc kubenswrapper[4687]: I0131 07:00:53.245529 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69t9n" event={"ID":"abe4784c-b61b-4947-a9e6-3375ec34a695","Type":"ContainerStarted","Data":"26a72bf3ce24902a396d58c5897e6f3b07a0081eb42dc8ec2a8fcdf42cc88b96"}
Jan 31 07:00:53 crc kubenswrapper[4687]: I0131 07:00:53.247899 4687 generic.go:334] "Generic (PLEG): container finished" podID="838dbbef-88b2-4605-9482-2628852377fa" containerID="2dbf0d7dd2f1a3804615820f3577e4acc15ed8e7ace3eff749ae2ee9f30796e4" exitCode=0
Jan 31 07:00:53 crc kubenswrapper[4687]: I0131 07:00:53.247931 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz" event={"ID":"838dbbef-88b2-4605-9482-2628852377fa","Type":"ContainerDied","Data":"2dbf0d7dd2f1a3804615820f3577e4acc15ed8e7ace3eff749ae2ee9f30796e4"}
Jan 31 07:00:54 crc kubenswrapper[4687]: I0131 07:00:54.255492 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69t9n" event={"ID":"abe4784c-b61b-4947-a9e6-3375ec34a695","Type":"ContainerStarted","Data":"1f1a26b9d1681fcf6d8cb6798f93b962d9576890ddb1ce2d5805359d3a0c64af"}
Jan 31 07:00:54 crc kubenswrapper[4687]: I0131 07:00:54.259259 4687 generic.go:334] "Generic (PLEG): container finished" podID="838dbbef-88b2-4605-9482-2628852377fa" containerID="0f74b084d4b76057473dee28bf2a08919c0e30554edc010d9e702ee3b93b84c5" exitCode=0
Jan 31 07:00:54 crc kubenswrapper[4687]: I0131 07:00:54.259301 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz" event={"ID":"838dbbef-88b2-4605-9482-2628852377fa","Type":"ContainerDied","Data":"0f74b084d4b76057473dee28bf2a08919c0e30554edc010d9e702ee3b93b84c5"}
Jan 31 07:00:55 crc kubenswrapper[4687]: I0131 07:00:55.047975 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m555d"]
Jan 31 07:00:55 crc kubenswrapper[4687]: I0131 07:00:55.048196 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m555d" podUID="0cd756bb-c628-4667-bfc0-8eaa2fe6b856" containerName="registry-server" containerID="cri-o://80363433b938133af17959e6c372a0ec347842659fafeba331f3dca58321852f" gracePeriod=2
Jan 31 07:00:55 crc kubenswrapper[4687]: I0131 07:00:55.266613 4687 generic.go:334] "Generic (PLEG): container finished" podID="abe4784c-b61b-4947-a9e6-3375ec34a695" containerID="1f1a26b9d1681fcf6d8cb6798f93b962d9576890ddb1ce2d5805359d3a0c64af" exitCode=0
Jan 31 07:00:55 crc kubenswrapper[4687]: I0131 07:00:55.266700 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69t9n" event={"ID":"abe4784c-b61b-4947-a9e6-3375ec34a695","Type":"ContainerDied","Data":"1f1a26b9d1681fcf6d8cb6798f93b962d9576890ddb1ce2d5805359d3a0c64af"}
Jan 31 07:00:55 crc kubenswrapper[4687]: I0131 07:00:55.269367 4687 generic.go:334] "Generic (PLEG): container finished" podID="0cd756bb-c628-4667-bfc0-8eaa2fe6b856" containerID="80363433b938133af17959e6c372a0ec347842659fafeba331f3dca58321852f" exitCode=0
Jan 31 07:00:55 crc kubenswrapper[4687]: I0131 07:00:55.269526 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m555d" event={"ID":"0cd756bb-c628-4667-bfc0-8eaa2fe6b856","Type":"ContainerDied","Data":"80363433b938133af17959e6c372a0ec347842659fafeba331f3dca58321852f"}
Jan 31 07:00:55 crc kubenswrapper[4687]: I0131 07:00:55.472580 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz"
Jan 31 07:00:55 crc kubenswrapper[4687]: I0131 07:00:55.609540 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7qzj\" (UniqueName: \"kubernetes.io/projected/838dbbef-88b2-4605-9482-2628852377fa-kube-api-access-d7qzj\") pod \"838dbbef-88b2-4605-9482-2628852377fa\" (UID: \"838dbbef-88b2-4605-9482-2628852377fa\") "
Jan 31 07:00:55 crc kubenswrapper[4687]: I0131 07:00:55.610106 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/838dbbef-88b2-4605-9482-2628852377fa-bundle\") pod \"838dbbef-88b2-4605-9482-2628852377fa\" (UID: \"838dbbef-88b2-4605-9482-2628852377fa\") "
Jan 31 07:00:55 crc kubenswrapper[4687]: I0131 07:00:55.610191 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/838dbbef-88b2-4605-9482-2628852377fa-util\") pod \"838dbbef-88b2-4605-9482-2628852377fa\" (UID: \"838dbbef-88b2-4605-9482-2628852377fa\") "
Jan 31 07:00:55 crc kubenswrapper[4687]: I0131 07:00:55.611100 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/838dbbef-88b2-4605-9482-2628852377fa-bundle" (OuterVolumeSpecName: "bundle") pod "838dbbef-88b2-4605-9482-2628852377fa" (UID: "838dbbef-88b2-4605-9482-2628852377fa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:00:55 crc kubenswrapper[4687]: I0131 07:00:55.614974 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/838dbbef-88b2-4605-9482-2628852377fa-kube-api-access-d7qzj" (OuterVolumeSpecName: "kube-api-access-d7qzj") pod "838dbbef-88b2-4605-9482-2628852377fa" (UID: "838dbbef-88b2-4605-9482-2628852377fa"). InnerVolumeSpecName "kube-api-access-d7qzj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:00:55 crc kubenswrapper[4687]: I0131 07:00:55.624889 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/838dbbef-88b2-4605-9482-2628852377fa-util" (OuterVolumeSpecName: "util") pod "838dbbef-88b2-4605-9482-2628852377fa" (UID: "838dbbef-88b2-4605-9482-2628852377fa"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:00:55 crc kubenswrapper[4687]: I0131 07:00:55.711894 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7qzj\" (UniqueName: \"kubernetes.io/projected/838dbbef-88b2-4605-9482-2628852377fa-kube-api-access-d7qzj\") on node \"crc\" DevicePath \"\""
Jan 31 07:00:55 crc kubenswrapper[4687]: I0131 07:00:55.712167 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/838dbbef-88b2-4605-9482-2628852377fa-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 07:00:55 crc kubenswrapper[4687]: I0131 07:00:55.712246 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/838dbbef-88b2-4605-9482-2628852377fa-util\") on node \"crc\" DevicePath \"\""
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.026180 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m555d"
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.118700 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-utilities\") pod \"0cd756bb-c628-4667-bfc0-8eaa2fe6b856\" (UID: \"0cd756bb-c628-4667-bfc0-8eaa2fe6b856\") "
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.118750 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-catalog-content\") pod \"0cd756bb-c628-4667-bfc0-8eaa2fe6b856\" (UID: \"0cd756bb-c628-4667-bfc0-8eaa2fe6b856\") "
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.118790 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5zfr\" (UniqueName: \"kubernetes.io/projected/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-kube-api-access-d5zfr\") pod \"0cd756bb-c628-4667-bfc0-8eaa2fe6b856\" (UID: \"0cd756bb-c628-4667-bfc0-8eaa2fe6b856\") "
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.120254 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-utilities" (OuterVolumeSpecName: "utilities") pod "0cd756bb-c628-4667-bfc0-8eaa2fe6b856" (UID: "0cd756bb-c628-4667-bfc0-8eaa2fe6b856"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.122872 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-kube-api-access-d5zfr" (OuterVolumeSpecName: "kube-api-access-d5zfr") pod "0cd756bb-c628-4667-bfc0-8eaa2fe6b856" (UID: "0cd756bb-c628-4667-bfc0-8eaa2fe6b856"). InnerVolumeSpecName "kube-api-access-d5zfr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.169108 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cd756bb-c628-4667-bfc0-8eaa2fe6b856" (UID: "0cd756bb-c628-4667-bfc0-8eaa2fe6b856"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.219850 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.219914 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.219925 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5zfr\" (UniqueName: \"kubernetes.io/projected/0cd756bb-c628-4667-bfc0-8eaa2fe6b856-kube-api-access-d5zfr\") on node \"crc\" DevicePath \"\""
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.276749 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz" event={"ID":"838dbbef-88b2-4605-9482-2628852377fa","Type":"ContainerDied","Data":"725495917b2bc39699bb62e83980bbed4b189694c3b065b71392b5413489a02c"}
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.276812 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="725495917b2bc39699bb62e83980bbed4b189694c3b065b71392b5413489a02c"
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.276777 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz"
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.281039 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m555d" event={"ID":"0cd756bb-c628-4667-bfc0-8eaa2fe6b856","Type":"ContainerDied","Data":"5b3f048fb79c4b004a8b0882e6fbd1821098f8d3ffd5a37ce99633582768a9ab"}
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.281085 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m555d"
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.281115 4687 scope.go:117] "RemoveContainer" containerID="80363433b938133af17959e6c372a0ec347842659fafeba331f3dca58321852f"
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.298814 4687 scope.go:117] "RemoveContainer" containerID="b429759d6ce01cc3f2489323faa1bd3f146e60ff945b5cae41debbe670920239"
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.318864 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m555d"]
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.325366 4687 scope.go:117] "RemoveContainer" containerID="37f8bf3e1d517c1caabaa01c3879b8b5f46e67653af82cded4f560d4556d3ea0"
Jan 31 07:00:56 crc kubenswrapper[4687]: I0131 07:00:56.325965 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m555d"]
Jan 31 07:00:57 crc kubenswrapper[4687]: I0131 07:00:57.293460 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69t9n" event={"ID":"abe4784c-b61b-4947-a9e6-3375ec34a695","Type":"ContainerStarted","Data":"32f0cafb33418bd95129d2b2c0bfe0ec74bd606efbb13e0c8112267d8106dfd7"}
Jan 31 07:00:57 crc kubenswrapper[4687]: I0131 07:00:57.312070 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-69t9n" podStartSLOduration=2.522732174 podStartE2EDuration="5.312052681s" podCreationTimestamp="2026-01-31 07:00:52 +0000 UTC" firstStartedPulling="2026-01-31 07:00:53.246040815 +0000 UTC m=+1079.523300390" lastFinishedPulling="2026-01-31 07:00:56.035361332 +0000 UTC m=+1082.312620897" observedRunningTime="2026-01-31 07:00:57.308951066 +0000 UTC m=+1083.586210641" watchObservedRunningTime="2026-01-31 07:00:57.312052681 +0000 UTC m=+1083.589312256"
Jan 31 07:00:57 crc kubenswrapper[4687]: I0131 07:00:57.610824 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cd756bb-c628-4667-bfc0-8eaa2fe6b856" path="/var/lib/kubelet/pods/0cd756bb-c628-4667-bfc0-8eaa2fe6b856/volumes"
Jan 31 07:01:02 crc kubenswrapper[4687]: I0131 07:01:02.787656 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-69t9n"
Jan 31 07:01:02 crc kubenswrapper[4687]: I0131 07:01:02.788257 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-69t9n"
Jan 31 07:01:02 crc kubenswrapper[4687]: I0131 07:01:02.840554 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-69t9n"
Jan 31 07:01:03 crc kubenswrapper[4687]: I0131 07:01:03.353964 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-69t9n"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.256075 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-69t9n"]
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.325214 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-69t9n" podUID="abe4784c-b61b-4947-a9e6-3375ec34a695" containerName="registry-server" containerID="cri-o://32f0cafb33418bd95129d2b2c0bfe0ec74bd606efbb13e0c8112267d8106dfd7" gracePeriod=2
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.391218 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn"]
Jan 31 07:01:05 crc kubenswrapper[4687]: E0131 07:01:05.392056 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd756bb-c628-4667-bfc0-8eaa2fe6b856" containerName="extract-utilities"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.392148 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd756bb-c628-4667-bfc0-8eaa2fe6b856" containerName="extract-utilities"
Jan 31 07:01:05 crc kubenswrapper[4687]: E0131 07:01:05.392228 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838dbbef-88b2-4605-9482-2628852377fa" containerName="util"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.392297 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="838dbbef-88b2-4605-9482-2628852377fa" containerName="util"
Jan 31 07:01:05 crc kubenswrapper[4687]: E0131 07:01:05.392379 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838dbbef-88b2-4605-9482-2628852377fa" containerName="extract"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.392477 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="838dbbef-88b2-4605-9482-2628852377fa" containerName="extract"
Jan 31 07:01:05 crc kubenswrapper[4687]: E0131 07:01:05.392549 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd756bb-c628-4667-bfc0-8eaa2fe6b856" containerName="extract-content"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.392618 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd756bb-c628-4667-bfc0-8eaa2fe6b856" containerName="extract-content"
Jan 31 07:01:05 crc kubenswrapper[4687]: E0131 07:01:05.392701 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838dbbef-88b2-4605-9482-2628852377fa" containerName="pull"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.392773 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="838dbbef-88b2-4605-9482-2628852377fa" containerName="pull"
Jan 31 07:01:05 crc kubenswrapper[4687]: E0131 07:01:05.392850 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd756bb-c628-4667-bfc0-8eaa2fe6b856" containerName="registry-server"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.392923 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd756bb-c628-4667-bfc0-8eaa2fe6b856" containerName="registry-server"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.393105 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="838dbbef-88b2-4605-9482-2628852377fa" containerName="extract"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.393188 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cd756bb-c628-4667-bfc0-8eaa2fe6b856" containerName="registry-server"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.393750 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.396607 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.396637 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.397079 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.398843 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.399639 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-mz78j"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.415930 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn"]
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.542003 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e-apiservice-cert\") pod \"metallb-operator-controller-manager-6bc67c7795-gjjmn\" (UID: \"56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e\") " pod="metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.542055 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e-webhook-cert\") pod \"metallb-operator-controller-manager-6bc67c7795-gjjmn\" (UID: \"56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e\") " pod="metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.542082 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2ptw\" (UniqueName: \"kubernetes.io/projected/56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e-kube-api-access-r2ptw\") pod \"metallb-operator-controller-manager-6bc67c7795-gjjmn\" (UID: \"56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e\") " pod="metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.643083 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e-apiservice-cert\") pod \"metallb-operator-controller-manager-6bc67c7795-gjjmn\" (UID: \"56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e\") " pod="metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.643145 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e-webhook-cert\") pod \"metallb-operator-controller-manager-6bc67c7795-gjjmn\" (UID: \"56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e\") " pod="metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.643179 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2ptw\" (UniqueName: \"kubernetes.io/projected/56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e-kube-api-access-r2ptw\") pod \"metallb-operator-controller-manager-6bc67c7795-gjjmn\" (UID: \"56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e\") " pod="metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.650934 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e-webhook-cert\") pod \"metallb-operator-controller-manager-6bc67c7795-gjjmn\" (UID: \"56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e\") " pod="metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.650992 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e-apiservice-cert\") pod \"metallb-operator-controller-manager-6bc67c7795-gjjmn\" (UID: \"56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e\") " pod="metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.670858 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2ptw\" (UniqueName: \"kubernetes.io/projected/56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e-kube-api-access-r2ptw\") pod \"metallb-operator-controller-manager-6bc67c7795-gjjmn\" (UID: \"56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e\") " pod="metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.711112 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.720790 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd"]
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.721453 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.726024 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-sd9jl"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.726283 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.726457 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.751716 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd"]
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.806307 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-69t9n"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.848240 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad709481-acec-41f1-af1d-3c84b69f7b2f-apiservice-cert\") pod \"metallb-operator-webhook-server-69bb4c5fc8-6rcfd\" (UID: \"ad709481-acec-41f1-af1d-3c84b69f7b2f\") " pod="metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.848327 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4npn\" (UniqueName: \"kubernetes.io/projected/ad709481-acec-41f1-af1d-3c84b69f7b2f-kube-api-access-x4npn\") pod \"metallb-operator-webhook-server-69bb4c5fc8-6rcfd\" (UID: \"ad709481-acec-41f1-af1d-3c84b69f7b2f\") " pod="metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.848366 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad709481-acec-41f1-af1d-3c84b69f7b2f-webhook-cert\") pod \"metallb-operator-webhook-server-69bb4c5fc8-6rcfd\" (UID: \"ad709481-acec-41f1-af1d-3c84b69f7b2f\") " pod="metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.948951 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr2bh\" (UniqueName: \"kubernetes.io/projected/abe4784c-b61b-4947-a9e6-3375ec34a695-kube-api-access-xr2bh\") pod \"abe4784c-b61b-4947-a9e6-3375ec34a695\" (UID: \"abe4784c-b61b-4947-a9e6-3375ec34a695\") "
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.949005 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe4784c-b61b-4947-a9e6-3375ec34a695-utilities\") pod \"abe4784c-b61b-4947-a9e6-3375ec34a695\" (UID: \"abe4784c-b61b-4947-a9e6-3375ec34a695\") "
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.949062 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe4784c-b61b-4947-a9e6-3375ec34a695-catalog-content\") pod \"abe4784c-b61b-4947-a9e6-3375ec34a695\" (UID: \"abe4784c-b61b-4947-a9e6-3375ec34a695\") "
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.949290 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad709481-acec-41f1-af1d-3c84b69f7b2f-apiservice-cert\") pod \"metallb-operator-webhook-server-69bb4c5fc8-6rcfd\" (UID: \"ad709481-acec-41f1-af1d-3c84b69f7b2f\") " pod="metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.949361 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4npn\" (UniqueName: \"kubernetes.io/projected/ad709481-acec-41f1-af1d-3c84b69f7b2f-kube-api-access-x4npn\") pod \"metallb-operator-webhook-server-69bb4c5fc8-6rcfd\" (UID: \"ad709481-acec-41f1-af1d-3c84b69f7b2f\") " pod="metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.949385 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad709481-acec-41f1-af1d-3c84b69f7b2f-webhook-cert\") pod \"metallb-operator-webhook-server-69bb4c5fc8-6rcfd\" (UID: \"ad709481-acec-41f1-af1d-3c84b69f7b2f\") " pod="metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.950964 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abe4784c-b61b-4947-a9e6-3375ec34a695-utilities" (OuterVolumeSpecName: "utilities") pod "abe4784c-b61b-4947-a9e6-3375ec34a695" (UID: "abe4784c-b61b-4947-a9e6-3375ec34a695"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.954778 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad709481-acec-41f1-af1d-3c84b69f7b2f-apiservice-cert\") pod \"metallb-operator-webhook-server-69bb4c5fc8-6rcfd\" (UID: \"ad709481-acec-41f1-af1d-3c84b69f7b2f\") " pod="metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.955140 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad709481-acec-41f1-af1d-3c84b69f7b2f-webhook-cert\") pod \"metallb-operator-webhook-server-69bb4c5fc8-6rcfd\" (UID: \"ad709481-acec-41f1-af1d-3c84b69f7b2f\") " pod="metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.955770 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe4784c-b61b-4947-a9e6-3375ec34a695-kube-api-access-xr2bh" (OuterVolumeSpecName: "kube-api-access-xr2bh") pod "abe4784c-b61b-4947-a9e6-3375ec34a695" (UID: "abe4784c-b61b-4947-a9e6-3375ec34a695"). InnerVolumeSpecName "kube-api-access-xr2bh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.974674 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4npn\" (UniqueName: \"kubernetes.io/projected/ad709481-acec-41f1-af1d-3c84b69f7b2f-kube-api-access-x4npn\") pod \"metallb-operator-webhook-server-69bb4c5fc8-6rcfd\" (UID: \"ad709481-acec-41f1-af1d-3c84b69f7b2f\") " pod="metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd"
Jan 31 07:01:05 crc kubenswrapper[4687]: I0131 07:01:05.988311 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn"]
Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.050307 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr2bh\" (UniqueName: \"kubernetes.io/projected/abe4784c-b61b-4947-a9e6-3375ec34a695-kube-api-access-xr2bh\") on node \"crc\" DevicePath \"\""
Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.050340 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe4784c-b61b-4947-a9e6-3375ec34a695-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.077567 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abe4784c-b61b-4947-a9e6-3375ec34a695-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abe4784c-b61b-4947-a9e6-3375ec34a695" (UID: "abe4784c-b61b-4947-a9e6-3375ec34a695"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.102643 4687 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd" Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.151571 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe4784c-b61b-4947-a9e6-3375ec34a695-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.350360 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn" event={"ID":"56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e","Type":"ContainerStarted","Data":"f81dd4c31c20cfdd106f79a26e2a9cea4e0ce2d48edb7181dd3f9d6bb4a30530"} Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.353057 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-69t9n" Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.355044 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69t9n" event={"ID":"abe4784c-b61b-4947-a9e6-3375ec34a695","Type":"ContainerDied","Data":"32f0cafb33418bd95129d2b2c0bfe0ec74bd606efbb13e0c8112267d8106dfd7"} Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.355126 4687 scope.go:117] "RemoveContainer" containerID="32f0cafb33418bd95129d2b2c0bfe0ec74bd606efbb13e0c8112267d8106dfd7" Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.373871 4687 scope.go:117] "RemoveContainer" containerID="1f1a26b9d1681fcf6d8cb6798f93b962d9576890ddb1ce2d5805359d3a0c64af" Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.352945 4687 generic.go:334] "Generic (PLEG): container finished" podID="abe4784c-b61b-4947-a9e6-3375ec34a695" containerID="32f0cafb33418bd95129d2b2c0bfe0ec74bd606efbb13e0c8112267d8106dfd7" exitCode=0 Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.373989 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-69t9n" event={"ID":"abe4784c-b61b-4947-a9e6-3375ec34a695","Type":"ContainerDied","Data":"26a72bf3ce24902a396d58c5897e6f3b07a0081eb42dc8ec2a8fcdf42cc88b96"} Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.401540 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-69t9n"] Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.406393 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-69t9n"] Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.428889 4687 scope.go:117] "RemoveContainer" containerID="a59dcf0634dc7f4203d83afe756f90b913609a58c748fb541e780bcfffe3849b" Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.441332 4687 scope.go:117] "RemoveContainer" containerID="32f0cafb33418bd95129d2b2c0bfe0ec74bd606efbb13e0c8112267d8106dfd7" Jan 31 07:01:06 crc kubenswrapper[4687]: E0131 07:01:06.441976 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f0cafb33418bd95129d2b2c0bfe0ec74bd606efbb13e0c8112267d8106dfd7\": container with ID starting with 32f0cafb33418bd95129d2b2c0bfe0ec74bd606efbb13e0c8112267d8106dfd7 not found: ID does not exist" containerID="32f0cafb33418bd95129d2b2c0bfe0ec74bd606efbb13e0c8112267d8106dfd7" Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.442007 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32f0cafb33418bd95129d2b2c0bfe0ec74bd606efbb13e0c8112267d8106dfd7"} err="failed to get container status \"32f0cafb33418bd95129d2b2c0bfe0ec74bd606efbb13e0c8112267d8106dfd7\": rpc error: code = NotFound desc = could not find container \"32f0cafb33418bd95129d2b2c0bfe0ec74bd606efbb13e0c8112267d8106dfd7\": container with ID starting with 32f0cafb33418bd95129d2b2c0bfe0ec74bd606efbb13e0c8112267d8106dfd7 not found: ID does not exist" Jan 31 07:01:06 crc kubenswrapper[4687]: 
I0131 07:01:06.442026 4687 scope.go:117] "RemoveContainer" containerID="1f1a26b9d1681fcf6d8cb6798f93b962d9576890ddb1ce2d5805359d3a0c64af" Jan 31 07:01:06 crc kubenswrapper[4687]: E0131 07:01:06.442269 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f1a26b9d1681fcf6d8cb6798f93b962d9576890ddb1ce2d5805359d3a0c64af\": container with ID starting with 1f1a26b9d1681fcf6d8cb6798f93b962d9576890ddb1ce2d5805359d3a0c64af not found: ID does not exist" containerID="1f1a26b9d1681fcf6d8cb6798f93b962d9576890ddb1ce2d5805359d3a0c64af" Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.442290 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f1a26b9d1681fcf6d8cb6798f93b962d9576890ddb1ce2d5805359d3a0c64af"} err="failed to get container status \"1f1a26b9d1681fcf6d8cb6798f93b962d9576890ddb1ce2d5805359d3a0c64af\": rpc error: code = NotFound desc = could not find container \"1f1a26b9d1681fcf6d8cb6798f93b962d9576890ddb1ce2d5805359d3a0c64af\": container with ID starting with 1f1a26b9d1681fcf6d8cb6798f93b962d9576890ddb1ce2d5805359d3a0c64af not found: ID does not exist" Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.442303 4687 scope.go:117] "RemoveContainer" containerID="a59dcf0634dc7f4203d83afe756f90b913609a58c748fb541e780bcfffe3849b" Jan 31 07:01:06 crc kubenswrapper[4687]: E0131 07:01:06.442633 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a59dcf0634dc7f4203d83afe756f90b913609a58c748fb541e780bcfffe3849b\": container with ID starting with a59dcf0634dc7f4203d83afe756f90b913609a58c748fb541e780bcfffe3849b not found: ID does not exist" containerID="a59dcf0634dc7f4203d83afe756f90b913609a58c748fb541e780bcfffe3849b" Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.442653 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a59dcf0634dc7f4203d83afe756f90b913609a58c748fb541e780bcfffe3849b"} err="failed to get container status \"a59dcf0634dc7f4203d83afe756f90b913609a58c748fb541e780bcfffe3849b\": rpc error: code = NotFound desc = could not find container \"a59dcf0634dc7f4203d83afe756f90b913609a58c748fb541e780bcfffe3849b\": container with ID starting with a59dcf0634dc7f4203d83afe756f90b913609a58c748fb541e780bcfffe3849b not found: ID does not exist" Jan 31 07:01:06 crc kubenswrapper[4687]: I0131 07:01:06.584501 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd"] Jan 31 07:01:07 crc kubenswrapper[4687]: I0131 07:01:07.381500 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd" event={"ID":"ad709481-acec-41f1-af1d-3c84b69f7b2f","Type":"ContainerStarted","Data":"8bf77cdccbe242dcd99a635ad0efa923d4b6df079d4d8d2a7492062f49fa54d2"} Jan 31 07:01:07 crc kubenswrapper[4687]: I0131 07:01:07.619280 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe4784c-b61b-4947-a9e6-3375ec34a695" path="/var/lib/kubelet/pods/abe4784c-b61b-4947-a9e6-3375ec34a695/volumes" Jan 31 07:01:09 crc kubenswrapper[4687]: I0131 07:01:09.401504 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn" event={"ID":"56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e","Type":"ContainerStarted","Data":"513a8c62bb854d20cbefa55716a83c550aed2850d404d9300702daa4066a0b5f"} Jan 31 07:01:09 crc kubenswrapper[4687]: I0131 07:01:09.401824 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn" Jan 31 07:01:09 crc kubenswrapper[4687]: I0131 07:01:09.421932 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn" podStartSLOduration=1.393862735 podStartE2EDuration="4.421901167s" podCreationTimestamp="2026-01-31 07:01:05 +0000 UTC" firstStartedPulling="2026-01-31 07:01:06.001162941 +0000 UTC m=+1092.278422536" lastFinishedPulling="2026-01-31 07:01:09.029201393 +0000 UTC m=+1095.306460968" observedRunningTime="2026-01-31 07:01:09.419869311 +0000 UTC m=+1095.697128886" watchObservedRunningTime="2026-01-31 07:01:09.421901167 +0000 UTC m=+1095.699160752" Jan 31 07:01:11 crc kubenswrapper[4687]: I0131 07:01:11.414219 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd" event={"ID":"ad709481-acec-41f1-af1d-3c84b69f7b2f","Type":"ContainerStarted","Data":"f7b332e9d4df14d17fb8ef0a67238f8a1e91057a14227cc308722edc5ef32a6d"} Jan 31 07:01:11 crc kubenswrapper[4687]: I0131 07:01:11.414572 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd" Jan 31 07:01:11 crc kubenswrapper[4687]: I0131 07:01:11.433094 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd" podStartSLOduration=2.303657118 podStartE2EDuration="6.433076856s" podCreationTimestamp="2026-01-31 07:01:05 +0000 UTC" firstStartedPulling="2026-01-31 07:01:06.597563798 +0000 UTC m=+1092.874823373" lastFinishedPulling="2026-01-31 07:01:10.726983536 +0000 UTC m=+1097.004243111" observedRunningTime="2026-01-31 07:01:11.432168671 +0000 UTC m=+1097.709428266" watchObservedRunningTime="2026-01-31 07:01:11.433076856 +0000 UTC m=+1097.710336431" Jan 31 07:01:26 crc kubenswrapper[4687]: I0131 07:01:26.107967 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-69bb4c5fc8-6rcfd" Jan 31 07:01:28 crc kubenswrapper[4687]: I0131 07:01:28.684423 4687 
patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:01:28 crc kubenswrapper[4687]: I0131 07:01:28.685566 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:01:45 crc kubenswrapper[4687]: I0131 07:01:45.714978 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6bc67c7795-gjjmn" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.381626 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-kmhqd"] Jan 31 07:01:46 crc kubenswrapper[4687]: E0131 07:01:46.381875 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe4784c-b61b-4947-a9e6-3375ec34a695" containerName="extract-content" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.381887 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe4784c-b61b-4947-a9e6-3375ec34a695" containerName="extract-content" Jan 31 07:01:46 crc kubenswrapper[4687]: E0131 07:01:46.381898 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe4784c-b61b-4947-a9e6-3375ec34a695" containerName="extract-utilities" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.381904 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe4784c-b61b-4947-a9e6-3375ec34a695" containerName="extract-utilities" Jan 31 07:01:46 crc kubenswrapper[4687]: E0131 07:01:46.381918 4687 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="abe4784c-b61b-4947-a9e6-3375ec34a695" containerName="registry-server" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.381923 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe4784c-b61b-4947-a9e6-3375ec34a695" containerName="registry-server" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.382013 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe4784c-b61b-4947-a9e6-3375ec34a695" containerName="registry-server" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.383821 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.385000 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth"] Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.385690 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.386653 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-bs42c" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.386811 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.386935 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.387565 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.393552 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth"] Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.465935 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5068efd9-cefe-48eb-96ff-886c9592c7c2-frr-sockets\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.466019 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5068efd9-cefe-48eb-96ff-886c9592c7c2-reloader\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.466096 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5068efd9-cefe-48eb-96ff-886c9592c7c2-metrics-certs\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.466215 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq2hg\" (UniqueName: \"kubernetes.io/projected/e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9-kube-api-access-cq2hg\") pod \"frr-k8s-webhook-server-7df86c4f6c-95vth\" (UID: \"e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.466318 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5068efd9-cefe-48eb-96ff-886c9592c7c2-metrics\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.466381 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5068efd9-cefe-48eb-96ff-886c9592c7c2-frr-startup\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.466565 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-95vth\" (UID: \"e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.466588 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fc2k\" (UniqueName: \"kubernetes.io/projected/5068efd9-cefe-48eb-96ff-886c9592c7c2-kube-api-access-9fc2k\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.466685 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5068efd9-cefe-48eb-96ff-886c9592c7c2-frr-conf\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.515456 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-cqvh6"] Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.516665 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-cqvh6" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.519155 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-mlbgs"] Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.523949 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.524009 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-wzxnv" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.524200 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.524651 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.533307 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-mlbgs"] Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.533429 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-mlbgs" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.535654 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.567480 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-95vth\" (UID: \"e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.567514 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fc2k\" (UniqueName: \"kubernetes.io/projected/5068efd9-cefe-48eb-96ff-886c9592c7c2-kube-api-access-9fc2k\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.567543 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5068efd9-cefe-48eb-96ff-886c9592c7c2-frr-conf\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.567567 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5068efd9-cefe-48eb-96ff-886c9592c7c2-frr-sockets\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.567588 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5068efd9-cefe-48eb-96ff-886c9592c7c2-reloader\") pod \"frr-k8s-kmhqd\" 
(UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.567604 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5068efd9-cefe-48eb-96ff-886c9592c7c2-metrics-certs\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.567633 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq2hg\" (UniqueName: \"kubernetes.io/projected/e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9-kube-api-access-cq2hg\") pod \"frr-k8s-webhook-server-7df86c4f6c-95vth\" (UID: \"e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.567661 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5068efd9-cefe-48eb-96ff-886c9592c7c2-metrics\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.567682 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5068efd9-cefe-48eb-96ff-886c9592c7c2-frr-startup\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: E0131 07:01:46.568202 4687 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 31 07:01:46 crc kubenswrapper[4687]: E0131 07:01:46.568311 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9-cert 
podName:e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9 nodeName:}" failed. No retries permitted until 2026-01-31 07:01:47.068288368 +0000 UTC m=+1133.345548003 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9-cert") pod "frr-k8s-webhook-server-7df86c4f6c-95vth" (UID: "e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9") : secret "frr-k8s-webhook-server-cert" not found Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.568801 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5068efd9-cefe-48eb-96ff-886c9592c7c2-frr-startup\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.569203 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5068efd9-cefe-48eb-96ff-886c9592c7c2-frr-sockets\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.569382 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5068efd9-cefe-48eb-96ff-886c9592c7c2-frr-conf\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.569576 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5068efd9-cefe-48eb-96ff-886c9592c7c2-reloader\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.569903 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/5068efd9-cefe-48eb-96ff-886c9592c7c2-metrics\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.586087 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5068efd9-cefe-48eb-96ff-886c9592c7c2-metrics-certs\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.608084 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq2hg\" (UniqueName: \"kubernetes.io/projected/e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9-kube-api-access-cq2hg\") pod \"frr-k8s-webhook-server-7df86c4f6c-95vth\" (UID: \"e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.613359 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fc2k\" (UniqueName: \"kubernetes.io/projected/5068efd9-cefe-48eb-96ff-886c9592c7c2-kube-api-access-9fc2k\") pod \"frr-k8s-kmhqd\" (UID: \"5068efd9-cefe-48eb-96ff-886c9592c7c2\") " pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.669362 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fafa13d1-be81-401e-bb57-ad4e391192c2-cert\") pod \"controller-6968d8fdc4-mlbgs\" (UID: \"fafa13d1-be81-401e-bb57-ad4e391192c2\") " pod="metallb-system/controller-6968d8fdc4-mlbgs" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.669451 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8cacba96-9df5-43d5-8e68-2a66b3dc0806-metallb-excludel2\") pod 
\"speaker-cqvh6\" (UID: \"8cacba96-9df5-43d5-8e68-2a66b3dc0806\") " pod="metallb-system/speaker-cqvh6" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.669480 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbmlq\" (UniqueName: \"kubernetes.io/projected/8cacba96-9df5-43d5-8e68-2a66b3dc0806-kube-api-access-nbmlq\") pod \"speaker-cqvh6\" (UID: \"8cacba96-9df5-43d5-8e68-2a66b3dc0806\") " pod="metallb-system/speaker-cqvh6" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.670088 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8cacba96-9df5-43d5-8e68-2a66b3dc0806-metrics-certs\") pod \"speaker-cqvh6\" (UID: \"8cacba96-9df5-43d5-8e68-2a66b3dc0806\") " pod="metallb-system/speaker-cqvh6" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.670199 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kbsp\" (UniqueName: \"kubernetes.io/projected/fafa13d1-be81-401e-bb57-ad4e391192c2-kube-api-access-2kbsp\") pod \"controller-6968d8fdc4-mlbgs\" (UID: \"fafa13d1-be81-401e-bb57-ad4e391192c2\") " pod="metallb-system/controller-6968d8fdc4-mlbgs" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.670242 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fafa13d1-be81-401e-bb57-ad4e391192c2-metrics-certs\") pod \"controller-6968d8fdc4-mlbgs\" (UID: \"fafa13d1-be81-401e-bb57-ad4e391192c2\") " pod="metallb-system/controller-6968d8fdc4-mlbgs" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.670302 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8cacba96-9df5-43d5-8e68-2a66b3dc0806-memberlist\") pod \"speaker-cqvh6\" 
(UID: \"8cacba96-9df5-43d5-8e68-2a66b3dc0806\") " pod="metallb-system/speaker-cqvh6" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.715701 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.771151 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8cacba96-9df5-43d5-8e68-2a66b3dc0806-metrics-certs\") pod \"speaker-cqvh6\" (UID: \"8cacba96-9df5-43d5-8e68-2a66b3dc0806\") " pod="metallb-system/speaker-cqvh6" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.771203 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kbsp\" (UniqueName: \"kubernetes.io/projected/fafa13d1-be81-401e-bb57-ad4e391192c2-kube-api-access-2kbsp\") pod \"controller-6968d8fdc4-mlbgs\" (UID: \"fafa13d1-be81-401e-bb57-ad4e391192c2\") " pod="metallb-system/controller-6968d8fdc4-mlbgs" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.771227 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fafa13d1-be81-401e-bb57-ad4e391192c2-metrics-certs\") pod \"controller-6968d8fdc4-mlbgs\" (UID: \"fafa13d1-be81-401e-bb57-ad4e391192c2\") " pod="metallb-system/controller-6968d8fdc4-mlbgs" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.771275 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8cacba96-9df5-43d5-8e68-2a66b3dc0806-memberlist\") pod \"speaker-cqvh6\" (UID: \"8cacba96-9df5-43d5-8e68-2a66b3dc0806\") " pod="metallb-system/speaker-cqvh6" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.771309 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fafa13d1-be81-401e-bb57-ad4e391192c2-cert\") 
pod \"controller-6968d8fdc4-mlbgs\" (UID: \"fafa13d1-be81-401e-bb57-ad4e391192c2\") " pod="metallb-system/controller-6968d8fdc4-mlbgs" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.771330 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8cacba96-9df5-43d5-8e68-2a66b3dc0806-metallb-excludel2\") pod \"speaker-cqvh6\" (UID: \"8cacba96-9df5-43d5-8e68-2a66b3dc0806\") " pod="metallb-system/speaker-cqvh6" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.771347 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbmlq\" (UniqueName: \"kubernetes.io/projected/8cacba96-9df5-43d5-8e68-2a66b3dc0806-kube-api-access-nbmlq\") pod \"speaker-cqvh6\" (UID: \"8cacba96-9df5-43d5-8e68-2a66b3dc0806\") " pod="metallb-system/speaker-cqvh6" Jan 31 07:01:46 crc kubenswrapper[4687]: E0131 07:01:46.771441 4687 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 31 07:01:46 crc kubenswrapper[4687]: E0131 07:01:46.771528 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fafa13d1-be81-401e-bb57-ad4e391192c2-metrics-certs podName:fafa13d1-be81-401e-bb57-ad4e391192c2 nodeName:}" failed. No retries permitted until 2026-01-31 07:01:47.27150918 +0000 UTC m=+1133.548768755 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fafa13d1-be81-401e-bb57-ad4e391192c2-metrics-certs") pod "controller-6968d8fdc4-mlbgs" (UID: "fafa13d1-be81-401e-bb57-ad4e391192c2") : secret "controller-certs-secret" not found Jan 31 07:01:46 crc kubenswrapper[4687]: E0131 07:01:46.771687 4687 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 07:01:46 crc kubenswrapper[4687]: E0131 07:01:46.771863 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cacba96-9df5-43d5-8e68-2a66b3dc0806-memberlist podName:8cacba96-9df5-43d5-8e68-2a66b3dc0806 nodeName:}" failed. No retries permitted until 2026-01-31 07:01:47.271843739 +0000 UTC m=+1133.549103344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8cacba96-9df5-43d5-8e68-2a66b3dc0806-memberlist") pod "speaker-cqvh6" (UID: "8cacba96-9df5-43d5-8e68-2a66b3dc0806") : secret "metallb-memberlist" not found Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.772226 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8cacba96-9df5-43d5-8e68-2a66b3dc0806-metallb-excludel2\") pod \"speaker-cqvh6\" (UID: \"8cacba96-9df5-43d5-8e68-2a66b3dc0806\") " pod="metallb-system/speaker-cqvh6" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.774296 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8cacba96-9df5-43d5-8e68-2a66b3dc0806-metrics-certs\") pod \"speaker-cqvh6\" (UID: \"8cacba96-9df5-43d5-8e68-2a66b3dc0806\") " pod="metallb-system/speaker-cqvh6" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.774904 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/fafa13d1-be81-401e-bb57-ad4e391192c2-cert\") pod \"controller-6968d8fdc4-mlbgs\" (UID: \"fafa13d1-be81-401e-bb57-ad4e391192c2\") " pod="metallb-system/controller-6968d8fdc4-mlbgs" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.787571 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbmlq\" (UniqueName: \"kubernetes.io/projected/8cacba96-9df5-43d5-8e68-2a66b3dc0806-kube-api-access-nbmlq\") pod \"speaker-cqvh6\" (UID: \"8cacba96-9df5-43d5-8e68-2a66b3dc0806\") " pod="metallb-system/speaker-cqvh6" Jan 31 07:01:46 crc kubenswrapper[4687]: I0131 07:01:46.788449 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kbsp\" (UniqueName: \"kubernetes.io/projected/fafa13d1-be81-401e-bb57-ad4e391192c2-kube-api-access-2kbsp\") pod \"controller-6968d8fdc4-mlbgs\" (UID: \"fafa13d1-be81-401e-bb57-ad4e391192c2\") " pod="metallb-system/controller-6968d8fdc4-mlbgs" Jan 31 07:01:47 crc kubenswrapper[4687]: I0131 07:01:47.074453 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-95vth\" (UID: \"e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth" Jan 31 07:01:47 crc kubenswrapper[4687]: I0131 07:01:47.078285 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-95vth\" (UID: \"e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth" Jan 31 07:01:47 crc kubenswrapper[4687]: I0131 07:01:47.278846 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/fafa13d1-be81-401e-bb57-ad4e391192c2-metrics-certs\") pod \"controller-6968d8fdc4-mlbgs\" (UID: \"fafa13d1-be81-401e-bb57-ad4e391192c2\") " pod="metallb-system/controller-6968d8fdc4-mlbgs" Jan 31 07:01:47 crc kubenswrapper[4687]: I0131 07:01:47.278924 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8cacba96-9df5-43d5-8e68-2a66b3dc0806-memberlist\") pod \"speaker-cqvh6\" (UID: \"8cacba96-9df5-43d5-8e68-2a66b3dc0806\") " pod="metallb-system/speaker-cqvh6" Jan 31 07:01:47 crc kubenswrapper[4687]: E0131 07:01:47.279050 4687 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 07:01:47 crc kubenswrapper[4687]: E0131 07:01:47.279149 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cacba96-9df5-43d5-8e68-2a66b3dc0806-memberlist podName:8cacba96-9df5-43d5-8e68-2a66b3dc0806 nodeName:}" failed. No retries permitted until 2026-01-31 07:01:48.279124312 +0000 UTC m=+1134.556383887 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8cacba96-9df5-43d5-8e68-2a66b3dc0806-memberlist") pod "speaker-cqvh6" (UID: "8cacba96-9df5-43d5-8e68-2a66b3dc0806") : secret "metallb-memberlist" not found Jan 31 07:01:47 crc kubenswrapper[4687]: I0131 07:01:47.283667 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fafa13d1-be81-401e-bb57-ad4e391192c2-metrics-certs\") pod \"controller-6968d8fdc4-mlbgs\" (UID: \"fafa13d1-be81-401e-bb57-ad4e391192c2\") " pod="metallb-system/controller-6968d8fdc4-mlbgs" Jan 31 07:01:47 crc kubenswrapper[4687]: I0131 07:01:47.326707 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth" Jan 31 07:01:47 crc kubenswrapper[4687]: I0131 07:01:47.471944 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-mlbgs" Jan 31 07:01:47 crc kubenswrapper[4687]: I0131 07:01:47.505366 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth"] Jan 31 07:01:47 crc kubenswrapper[4687]: W0131 07:01:47.512680 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6b3e6b5_b5bc_4cc2_9987_c55bb71c29c9.slice/crio-4195ee135ce644346cbc87ae34f64603c29fa3391f614b508c478711dbd9408a WatchSource:0}: Error finding container 4195ee135ce644346cbc87ae34f64603c29fa3391f614b508c478711dbd9408a: Status 404 returned error can't find the container with id 4195ee135ce644346cbc87ae34f64603c29fa3391f614b508c478711dbd9408a Jan 31 07:01:47 crc kubenswrapper[4687]: I0131 07:01:47.617246 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kmhqd" event={"ID":"5068efd9-cefe-48eb-96ff-886c9592c7c2","Type":"ContainerStarted","Data":"6e4a25dac1ea10baaeeca6e274369cefcfa8dbd32ab0db04ea952b2d2667633f"} Jan 31 07:01:47 crc kubenswrapper[4687]: I0131 07:01:47.618157 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth" event={"ID":"e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9","Type":"ContainerStarted","Data":"4195ee135ce644346cbc87ae34f64603c29fa3391f614b508c478711dbd9408a"} Jan 31 07:01:47 crc kubenswrapper[4687]: I0131 07:01:47.659713 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-mlbgs"] Jan 31 07:01:48 crc kubenswrapper[4687]: I0131 07:01:48.291548 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/8cacba96-9df5-43d5-8e68-2a66b3dc0806-memberlist\") pod \"speaker-cqvh6\" (UID: \"8cacba96-9df5-43d5-8e68-2a66b3dc0806\") " pod="metallb-system/speaker-cqvh6" Jan 31 07:01:48 crc kubenswrapper[4687]: I0131 07:01:48.311457 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8cacba96-9df5-43d5-8e68-2a66b3dc0806-memberlist\") pod \"speaker-cqvh6\" (UID: \"8cacba96-9df5-43d5-8e68-2a66b3dc0806\") " pod="metallb-system/speaker-cqvh6" Jan 31 07:01:48 crc kubenswrapper[4687]: I0131 07:01:48.347119 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-cqvh6" Jan 31 07:01:48 crc kubenswrapper[4687]: I0131 07:01:48.654631 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cqvh6" event={"ID":"8cacba96-9df5-43d5-8e68-2a66b3dc0806","Type":"ContainerStarted","Data":"2fffb4837c8adfa10d1930e1837695bcdb729416b9fec3b8771926ff1fd58790"} Jan 31 07:01:48 crc kubenswrapper[4687]: I0131 07:01:48.674212 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-mlbgs" event={"ID":"fafa13d1-be81-401e-bb57-ad4e391192c2","Type":"ContainerStarted","Data":"e8a2383efdd503ca4584b0c8963d4981c9ee85173c09f0e112ec9717c31ee265"} Jan 31 07:01:48 crc kubenswrapper[4687]: I0131 07:01:48.674267 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-mlbgs" event={"ID":"fafa13d1-be81-401e-bb57-ad4e391192c2","Type":"ContainerStarted","Data":"b01fa29a4d6925da68367bc0e88d7085f00655b57ac7bc567bc1459cbb06e9d6"} Jan 31 07:01:49 crc kubenswrapper[4687]: I0131 07:01:49.693157 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cqvh6" event={"ID":"8cacba96-9df5-43d5-8e68-2a66b3dc0806","Type":"ContainerStarted","Data":"35a5dd891cc3b5c638a0ce6766bf5b9b78a788b1307dfaf094d874f8ef5d240b"} Jan 31 07:01:51 crc kubenswrapper[4687]: I0131 
07:01:51.706812 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cqvh6" event={"ID":"8cacba96-9df5-43d5-8e68-2a66b3dc0806","Type":"ContainerStarted","Data":"044aed50faa904d84707e34b8d068b34ed6419823aae38df9b41159b002eb015"} Jan 31 07:01:51 crc kubenswrapper[4687]: I0131 07:01:51.707347 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-cqvh6" Jan 31 07:01:51 crc kubenswrapper[4687]: I0131 07:01:51.713869 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-mlbgs" event={"ID":"fafa13d1-be81-401e-bb57-ad4e391192c2","Type":"ContainerStarted","Data":"6b573a4552e0061a07e539b10a70a3dfa8b614bb81d62054100458bb4ea838f4"} Jan 31 07:01:51 crc kubenswrapper[4687]: I0131 07:01:51.714042 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-mlbgs" Jan 31 07:01:51 crc kubenswrapper[4687]: I0131 07:01:51.742747 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-cqvh6" podStartSLOduration=3.354729413 podStartE2EDuration="5.742718807s" podCreationTimestamp="2026-01-31 07:01:46 +0000 UTC" firstStartedPulling="2026-01-31 07:01:48.726065445 +0000 UTC m=+1135.003325020" lastFinishedPulling="2026-01-31 07:01:51.114054839 +0000 UTC m=+1137.391314414" observedRunningTime="2026-01-31 07:01:51.72577941 +0000 UTC m=+1138.003038995" watchObservedRunningTime="2026-01-31 07:01:51.742718807 +0000 UTC m=+1138.019978382" Jan 31 07:01:51 crc kubenswrapper[4687]: I0131 07:01:51.764054 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-mlbgs" podStartSLOduration=2.445103511 podStartE2EDuration="5.764030104s" podCreationTimestamp="2026-01-31 07:01:46 +0000 UTC" firstStartedPulling="2026-01-31 07:01:47.786608711 +0000 UTC m=+1134.063868296" lastFinishedPulling="2026-01-31 07:01:51.105535314 +0000 UTC 
m=+1137.382794889" observedRunningTime="2026-01-31 07:01:51.756726453 +0000 UTC m=+1138.033986028" watchObservedRunningTime="2026-01-31 07:01:51.764030104 +0000 UTC m=+1138.041289679" Jan 31 07:01:54 crc kubenswrapper[4687]: I0131 07:01:54.731474 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth" event={"ID":"e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9","Type":"ContainerStarted","Data":"e32fcec222aedb8e553b8acb4b0e95eabab4be3e7e55075e2741ec1fe99f47c7"} Jan 31 07:01:54 crc kubenswrapper[4687]: I0131 07:01:54.731878 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth" Jan 31 07:01:54 crc kubenswrapper[4687]: I0131 07:01:54.733242 4687 generic.go:334] "Generic (PLEG): container finished" podID="5068efd9-cefe-48eb-96ff-886c9592c7c2" containerID="c66e89571c7787e479b6618e60fa81cceec1fd4590a19697d7cd35ced82ef6ac" exitCode=0 Jan 31 07:01:54 crc kubenswrapper[4687]: I0131 07:01:54.733294 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kmhqd" event={"ID":"5068efd9-cefe-48eb-96ff-886c9592c7c2","Type":"ContainerDied","Data":"c66e89571c7787e479b6618e60fa81cceec1fd4590a19697d7cd35ced82ef6ac"} Jan 31 07:01:54 crc kubenswrapper[4687]: I0131 07:01:54.748863 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth" podStartSLOduration=2.379632847 podStartE2EDuration="8.748839449s" podCreationTimestamp="2026-01-31 07:01:46 +0000 UTC" firstStartedPulling="2026-01-31 07:01:47.527951851 +0000 UTC m=+1133.805211436" lastFinishedPulling="2026-01-31 07:01:53.897158463 +0000 UTC m=+1140.174418038" observedRunningTime="2026-01-31 07:01:54.745485287 +0000 UTC m=+1141.022744892" watchObservedRunningTime="2026-01-31 07:01:54.748839449 +0000 UTC m=+1141.026099044" Jan 31 07:01:55 crc kubenswrapper[4687]: I0131 07:01:55.758389 4687 generic.go:334] 
"Generic (PLEG): container finished" podID="5068efd9-cefe-48eb-96ff-886c9592c7c2" containerID="16239dd4c089b9c7a51674fd6d5773375734b7434d2a2d52d7f7845fd6b62a5b" exitCode=0 Jan 31 07:01:55 crc kubenswrapper[4687]: I0131 07:01:55.758473 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kmhqd" event={"ID":"5068efd9-cefe-48eb-96ff-886c9592c7c2","Type":"ContainerDied","Data":"16239dd4c089b9c7a51674fd6d5773375734b7434d2a2d52d7f7845fd6b62a5b"} Jan 31 07:01:56 crc kubenswrapper[4687]: I0131 07:01:56.765924 4687 generic.go:334] "Generic (PLEG): container finished" podID="5068efd9-cefe-48eb-96ff-886c9592c7c2" containerID="c462eefd91007d84dfdb182565de352990c4741ce8d9ac14f31be5eeda7f6def" exitCode=0 Jan 31 07:01:56 crc kubenswrapper[4687]: I0131 07:01:56.765956 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kmhqd" event={"ID":"5068efd9-cefe-48eb-96ff-886c9592c7c2","Type":"ContainerDied","Data":"c462eefd91007d84dfdb182565de352990c4741ce8d9ac14f31be5eeda7f6def"} Jan 31 07:01:57 crc kubenswrapper[4687]: I0131 07:01:57.477154 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-mlbgs" Jan 31 07:01:57 crc kubenswrapper[4687]: I0131 07:01:57.773879 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kmhqd" event={"ID":"5068efd9-cefe-48eb-96ff-886c9592c7c2","Type":"ContainerStarted","Data":"6b1d9f96c7e2bb7729e2875ceffdacc26e68d25445cf6074c50cf22207fa33f4"} Jan 31 07:01:57 crc kubenswrapper[4687]: I0131 07:01:57.773921 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kmhqd" event={"ID":"5068efd9-cefe-48eb-96ff-886c9592c7c2","Type":"ContainerStarted","Data":"273ca683d86f363640359b5cb572eaa558deae87f72122470f6332dda09a59c4"} Jan 31 07:01:57 crc kubenswrapper[4687]: I0131 07:01:57.773940 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kmhqd" 
event={"ID":"5068efd9-cefe-48eb-96ff-886c9592c7c2","Type":"ContainerStarted","Data":"5a6370dd3636d22fd6e1317c3db4ffd61e4b01ba02a52f0e45304019f42781cd"} Jan 31 07:01:58 crc kubenswrapper[4687]: I0131 07:01:58.351055 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-cqvh6" Jan 31 07:01:58 crc kubenswrapper[4687]: I0131 07:01:58.683963 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:01:58 crc kubenswrapper[4687]: I0131 07:01:58.684285 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:01:58 crc kubenswrapper[4687]: I0131 07:01:58.788950 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kmhqd" event={"ID":"5068efd9-cefe-48eb-96ff-886c9592c7c2","Type":"ContainerStarted","Data":"debb5512d035f27b1c66bd3d717475122cd46bfcb6adef25ab878a77fe2c8aae"} Jan 31 07:01:58 crc kubenswrapper[4687]: I0131 07:01:58.789562 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kmhqd" event={"ID":"5068efd9-cefe-48eb-96ff-886c9592c7c2","Type":"ContainerStarted","Data":"bed7e060450dceb6a0fa87ec08075615d241227864bb779eca4d0eae027dfabe"} Jan 31 07:01:58 crc kubenswrapper[4687]: I0131 07:01:58.789684 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kmhqd" event={"ID":"5068efd9-cefe-48eb-96ff-886c9592c7c2","Type":"ContainerStarted","Data":"0f98f2f118b276cc7e2ed9a31b15a83ce08fd463e08cec08fc1b7fb8137e9ea8"} Jan 31 
07:01:58 crc kubenswrapper[4687]: I0131 07:01:58.790450 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:01:58 crc kubenswrapper[4687]: I0131 07:01:58.833055 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-kmhqd" podStartSLOduration=5.803168623 podStartE2EDuration="12.833039886s" podCreationTimestamp="2026-01-31 07:01:46 +0000 UTC" firstStartedPulling="2026-01-31 07:01:46.852190244 +0000 UTC m=+1133.129449819" lastFinishedPulling="2026-01-31 07:01:53.882061507 +0000 UTC m=+1140.159321082" observedRunningTime="2026-01-31 07:01:58.825454927 +0000 UTC m=+1145.102714522" watchObservedRunningTime="2026-01-31 07:01:58.833039886 +0000 UTC m=+1145.110299461" Jan 31 07:02:01 crc kubenswrapper[4687]: I0131 07:02:01.716870 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:02:01 crc kubenswrapper[4687]: I0131 07:02:01.751249 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:02:04 crc kubenswrapper[4687]: I0131 07:02:04.215134 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-tb7qq"] Jan 31 07:02:04 crc kubenswrapper[4687]: I0131 07:02:04.215876 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-tb7qq" Jan 31 07:02:04 crc kubenswrapper[4687]: I0131 07:02:04.218758 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-c4v4h" Jan 31 07:02:04 crc kubenswrapper[4687]: I0131 07:02:04.218942 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 31 07:02:04 crc kubenswrapper[4687]: I0131 07:02:04.218968 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 31 07:02:04 crc kubenswrapper[4687]: I0131 07:02:04.237448 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-tb7qq"] Jan 31 07:02:04 crc kubenswrapper[4687]: I0131 07:02:04.338477 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp67x\" (UniqueName: \"kubernetes.io/projected/9af5d43b-ac94-41b6-8302-9249784adc9b-kube-api-access-tp67x\") pod \"mariadb-operator-index-tb7qq\" (UID: \"9af5d43b-ac94-41b6-8302-9249784adc9b\") " pod="openstack-operators/mariadb-operator-index-tb7qq" Jan 31 07:02:04 crc kubenswrapper[4687]: I0131 07:02:04.439377 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp67x\" (UniqueName: \"kubernetes.io/projected/9af5d43b-ac94-41b6-8302-9249784adc9b-kube-api-access-tp67x\") pod \"mariadb-operator-index-tb7qq\" (UID: \"9af5d43b-ac94-41b6-8302-9249784adc9b\") " pod="openstack-operators/mariadb-operator-index-tb7qq" Jan 31 07:02:04 crc kubenswrapper[4687]: I0131 07:02:04.457152 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp67x\" (UniqueName: \"kubernetes.io/projected/9af5d43b-ac94-41b6-8302-9249784adc9b-kube-api-access-tp67x\") pod \"mariadb-operator-index-tb7qq\" (UID: \"9af5d43b-ac94-41b6-8302-9249784adc9b\") " 
pod="openstack-operators/mariadb-operator-index-tb7qq" Jan 31 07:02:04 crc kubenswrapper[4687]: I0131 07:02:04.532639 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-tb7qq" Jan 31 07:02:04 crc kubenswrapper[4687]: I0131 07:02:04.914603 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-tb7qq"] Jan 31 07:02:04 crc kubenswrapper[4687]: W0131 07:02:04.916788 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9af5d43b_ac94_41b6_8302_9249784adc9b.slice/crio-2339a6931269fc411b6feacf8fbf25d5b8c957c6bc4161b4f5d1cbf1e2b1382d WatchSource:0}: Error finding container 2339a6931269fc411b6feacf8fbf25d5b8c957c6bc4161b4f5d1cbf1e2b1382d: Status 404 returned error can't find the container with id 2339a6931269fc411b6feacf8fbf25d5b8c957c6bc4161b4f5d1cbf1e2b1382d Jan 31 07:02:05 crc kubenswrapper[4687]: I0131 07:02:05.825354 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-tb7qq" event={"ID":"9af5d43b-ac94-41b6-8302-9249784adc9b","Type":"ContainerStarted","Data":"2339a6931269fc411b6feacf8fbf25d5b8c957c6bc4161b4f5d1cbf1e2b1382d"} Jan 31 07:02:06 crc kubenswrapper[4687]: I0131 07:02:06.720076 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-kmhqd" Jan 31 07:02:07 crc kubenswrapper[4687]: I0131 07:02:07.334148 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-95vth" Jan 31 07:02:07 crc kubenswrapper[4687]: I0131 07:02:07.574272 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-tb7qq"] Jan 31 07:02:08 crc kubenswrapper[4687]: I0131 07:02:08.185794 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/mariadb-operator-index-7rd2t"] Jan 31 07:02:08 crc kubenswrapper[4687]: I0131 07:02:08.186624 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-7rd2t" Jan 31 07:02:08 crc kubenswrapper[4687]: I0131 07:02:08.195424 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-7rd2t"] Jan 31 07:02:08 crc kubenswrapper[4687]: I0131 07:02:08.288106 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfsrx\" (UniqueName: \"kubernetes.io/projected/b26b5ca8-6e8a-41f4-bf71-822aef1f73bf-kube-api-access-tfsrx\") pod \"mariadb-operator-index-7rd2t\" (UID: \"b26b5ca8-6e8a-41f4-bf71-822aef1f73bf\") " pod="openstack-operators/mariadb-operator-index-7rd2t" Jan 31 07:02:08 crc kubenswrapper[4687]: I0131 07:02:08.389318 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfsrx\" (UniqueName: \"kubernetes.io/projected/b26b5ca8-6e8a-41f4-bf71-822aef1f73bf-kube-api-access-tfsrx\") pod \"mariadb-operator-index-7rd2t\" (UID: \"b26b5ca8-6e8a-41f4-bf71-822aef1f73bf\") " pod="openstack-operators/mariadb-operator-index-7rd2t" Jan 31 07:02:08 crc kubenswrapper[4687]: I0131 07:02:08.408868 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfsrx\" (UniqueName: \"kubernetes.io/projected/b26b5ca8-6e8a-41f4-bf71-822aef1f73bf-kube-api-access-tfsrx\") pod \"mariadb-operator-index-7rd2t\" (UID: \"b26b5ca8-6e8a-41f4-bf71-822aef1f73bf\") " pod="openstack-operators/mariadb-operator-index-7rd2t" Jan 31 07:02:08 crc kubenswrapper[4687]: I0131 07:02:08.513218 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-7rd2t" Jan 31 07:02:11 crc kubenswrapper[4687]: I0131 07:02:11.725915 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-7rd2t"] Jan 31 07:02:11 crc kubenswrapper[4687]: W0131 07:02:11.736274 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb26b5ca8_6e8a_41f4_bf71_822aef1f73bf.slice/crio-61984704ec68fd83e876e8eece6f46f6fd73dc003d27bfa0e590c31c4eecdc62 WatchSource:0}: Error finding container 61984704ec68fd83e876e8eece6f46f6fd73dc003d27bfa0e590c31c4eecdc62: Status 404 returned error can't find the container with id 61984704ec68fd83e876e8eece6f46f6fd73dc003d27bfa0e590c31c4eecdc62 Jan 31 07:02:11 crc kubenswrapper[4687]: I0131 07:02:11.861325 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-7rd2t" event={"ID":"b26b5ca8-6e8a-41f4-bf71-822aef1f73bf","Type":"ContainerStarted","Data":"61984704ec68fd83e876e8eece6f46f6fd73dc003d27bfa0e590c31c4eecdc62"} Jan 31 07:02:11 crc kubenswrapper[4687]: I0131 07:02:11.863191 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-tb7qq" event={"ID":"9af5d43b-ac94-41b6-8302-9249784adc9b","Type":"ContainerStarted","Data":"f5b5681cc58557d8a69df40f7c48283c0dda9f9a710f45782c73097f1159f1c8"} Jan 31 07:02:11 crc kubenswrapper[4687]: I0131 07:02:11.863384 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-tb7qq" podUID="9af5d43b-ac94-41b6-8302-9249784adc9b" containerName="registry-server" containerID="cri-o://f5b5681cc58557d8a69df40f7c48283c0dda9f9a710f45782c73097f1159f1c8" gracePeriod=2 Jan 31 07:02:11 crc kubenswrapper[4687]: I0131 07:02:11.876630 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-index-tb7qq" podStartSLOduration=1.4760759509999999 podStartE2EDuration="7.876611288s" podCreationTimestamp="2026-01-31 07:02:04 +0000 UTC" firstStartedPulling="2026-01-31 07:02:04.919238882 +0000 UTC m=+1151.196498457" lastFinishedPulling="2026-01-31 07:02:11.319774219 +0000 UTC m=+1157.597033794" observedRunningTime="2026-01-31 07:02:11.876430483 +0000 UTC m=+1158.153690068" watchObservedRunningTime="2026-01-31 07:02:11.876611288 +0000 UTC m=+1158.153870883" Jan 31 07:02:12 crc kubenswrapper[4687]: I0131 07:02:12.209227 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-tb7qq" Jan 31 07:02:12 crc kubenswrapper[4687]: I0131 07:02:12.396209 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp67x\" (UniqueName: \"kubernetes.io/projected/9af5d43b-ac94-41b6-8302-9249784adc9b-kube-api-access-tp67x\") pod \"9af5d43b-ac94-41b6-8302-9249784adc9b\" (UID: \"9af5d43b-ac94-41b6-8302-9249784adc9b\") " Jan 31 07:02:12 crc kubenswrapper[4687]: I0131 07:02:12.401518 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af5d43b-ac94-41b6-8302-9249784adc9b-kube-api-access-tp67x" (OuterVolumeSpecName: "kube-api-access-tp67x") pod "9af5d43b-ac94-41b6-8302-9249784adc9b" (UID: "9af5d43b-ac94-41b6-8302-9249784adc9b"). InnerVolumeSpecName "kube-api-access-tp67x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:02:12 crc kubenswrapper[4687]: I0131 07:02:12.497193 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp67x\" (UniqueName: \"kubernetes.io/projected/9af5d43b-ac94-41b6-8302-9249784adc9b-kube-api-access-tp67x\") on node \"crc\" DevicePath \"\"" Jan 31 07:02:12 crc kubenswrapper[4687]: I0131 07:02:12.870116 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-7rd2t" event={"ID":"b26b5ca8-6e8a-41f4-bf71-822aef1f73bf","Type":"ContainerStarted","Data":"fe2500c2359c71955ed432824aab11abd2f10c5850ec93e1a9ec75d5c04b517e"} Jan 31 07:02:12 crc kubenswrapper[4687]: I0131 07:02:12.871386 4687 generic.go:334] "Generic (PLEG): container finished" podID="9af5d43b-ac94-41b6-8302-9249784adc9b" containerID="f5b5681cc58557d8a69df40f7c48283c0dda9f9a710f45782c73097f1159f1c8" exitCode=0 Jan 31 07:02:12 crc kubenswrapper[4687]: I0131 07:02:12.871435 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-tb7qq" event={"ID":"9af5d43b-ac94-41b6-8302-9249784adc9b","Type":"ContainerDied","Data":"f5b5681cc58557d8a69df40f7c48283c0dda9f9a710f45782c73097f1159f1c8"} Jan 31 07:02:12 crc kubenswrapper[4687]: I0131 07:02:12.871459 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-tb7qq" event={"ID":"9af5d43b-ac94-41b6-8302-9249784adc9b","Type":"ContainerDied","Data":"2339a6931269fc411b6feacf8fbf25d5b8c957c6bc4161b4f5d1cbf1e2b1382d"} Jan 31 07:02:12 crc kubenswrapper[4687]: I0131 07:02:12.871476 4687 scope.go:117] "RemoveContainer" containerID="f5b5681cc58557d8a69df40f7c48283c0dda9f9a710f45782c73097f1159f1c8" Jan 31 07:02:12 crc kubenswrapper[4687]: I0131 07:02:12.871481 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-tb7qq" Jan 31 07:02:12 crc kubenswrapper[4687]: I0131 07:02:12.896041 4687 scope.go:117] "RemoveContainer" containerID="f5b5681cc58557d8a69df40f7c48283c0dda9f9a710f45782c73097f1159f1c8" Jan 31 07:02:12 crc kubenswrapper[4687]: E0131 07:02:12.896612 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5b5681cc58557d8a69df40f7c48283c0dda9f9a710f45782c73097f1159f1c8\": container with ID starting with f5b5681cc58557d8a69df40f7c48283c0dda9f9a710f45782c73097f1159f1c8 not found: ID does not exist" containerID="f5b5681cc58557d8a69df40f7c48283c0dda9f9a710f45782c73097f1159f1c8" Jan 31 07:02:12 crc kubenswrapper[4687]: I0131 07:02:12.896662 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5b5681cc58557d8a69df40f7c48283c0dda9f9a710f45782c73097f1159f1c8"} err="failed to get container status \"f5b5681cc58557d8a69df40f7c48283c0dda9f9a710f45782c73097f1159f1c8\": rpc error: code = NotFound desc = could not find container \"f5b5681cc58557d8a69df40f7c48283c0dda9f9a710f45782c73097f1159f1c8\": container with ID starting with f5b5681cc58557d8a69df40f7c48283c0dda9f9a710f45782c73097f1159f1c8 not found: ID does not exist" Jan 31 07:02:12 crc kubenswrapper[4687]: I0131 07:02:12.901460 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-7rd2t" podStartSLOduration=4.395938443 podStartE2EDuration="4.901405046s" podCreationTimestamp="2026-01-31 07:02:08 +0000 UTC" firstStartedPulling="2026-01-31 07:02:11.74029754 +0000 UTC m=+1158.017557115" lastFinishedPulling="2026-01-31 07:02:12.245764143 +0000 UTC m=+1158.523023718" observedRunningTime="2026-01-31 07:02:12.892043988 +0000 UTC m=+1159.169303563" watchObservedRunningTime="2026-01-31 07:02:12.901405046 +0000 UTC m=+1159.178664641" Jan 31 07:02:12 crc kubenswrapper[4687]: I0131 
07:02:12.910752 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-tb7qq"] Jan 31 07:02:12 crc kubenswrapper[4687]: I0131 07:02:12.914679 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-tb7qq"] Jan 31 07:02:13 crc kubenswrapper[4687]: I0131 07:02:13.612526 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af5d43b-ac94-41b6-8302-9249784adc9b" path="/var/lib/kubelet/pods/9af5d43b-ac94-41b6-8302-9249784adc9b/volumes" Jan 31 07:02:18 crc kubenswrapper[4687]: I0131 07:02:18.513825 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-7rd2t" Jan 31 07:02:18 crc kubenswrapper[4687]: I0131 07:02:18.514499 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-7rd2t" Jan 31 07:02:18 crc kubenswrapper[4687]: I0131 07:02:18.543735 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-7rd2t" Jan 31 07:02:18 crc kubenswrapper[4687]: I0131 07:02:18.932784 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-7rd2t" Jan 31 07:02:24 crc kubenswrapper[4687]: I0131 07:02:24.383902 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm"] Jan 31 07:02:24 crc kubenswrapper[4687]: E0131 07:02:24.384392 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af5d43b-ac94-41b6-8302-9249784adc9b" containerName="registry-server" Jan 31 07:02:24 crc kubenswrapper[4687]: I0131 07:02:24.384417 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af5d43b-ac94-41b6-8302-9249784adc9b" containerName="registry-server" Jan 31 07:02:24 crc kubenswrapper[4687]: I0131 07:02:24.384544 4687 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9af5d43b-ac94-41b6-8302-9249784adc9b" containerName="registry-server" Jan 31 07:02:24 crc kubenswrapper[4687]: I0131 07:02:24.385256 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" Jan 31 07:02:24 crc kubenswrapper[4687]: I0131 07:02:24.394472 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm"] Jan 31 07:02:24 crc kubenswrapper[4687]: I0131 07:02:24.399029 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9sffv" Jan 31 07:02:24 crc kubenswrapper[4687]: I0131 07:02:24.448284 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2x7m\" (UniqueName: \"kubernetes.io/projected/ce7e9c35-2d20-496b-bc0b-965d64cbd140-kube-api-access-s2x7m\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm\" (UID: \"ce7e9c35-2d20-496b-bc0b-965d64cbd140\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" Jan 31 07:02:24 crc kubenswrapper[4687]: I0131 07:02:24.448353 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce7e9c35-2d20-496b-bc0b-965d64cbd140-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm\" (UID: \"ce7e9c35-2d20-496b-bc0b-965d64cbd140\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" Jan 31 07:02:24 crc kubenswrapper[4687]: I0131 07:02:24.448486 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce7e9c35-2d20-496b-bc0b-965d64cbd140-util\") pod 
\"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm\" (UID: \"ce7e9c35-2d20-496b-bc0b-965d64cbd140\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" Jan 31 07:02:24 crc kubenswrapper[4687]: I0131 07:02:24.549020 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce7e9c35-2d20-496b-bc0b-965d64cbd140-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm\" (UID: \"ce7e9c35-2d20-496b-bc0b-965d64cbd140\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" Jan 31 07:02:24 crc kubenswrapper[4687]: I0131 07:02:24.549087 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2x7m\" (UniqueName: \"kubernetes.io/projected/ce7e9c35-2d20-496b-bc0b-965d64cbd140-kube-api-access-s2x7m\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm\" (UID: \"ce7e9c35-2d20-496b-bc0b-965d64cbd140\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" Jan 31 07:02:24 crc kubenswrapper[4687]: I0131 07:02:24.549128 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce7e9c35-2d20-496b-bc0b-965d64cbd140-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm\" (UID: \"ce7e9c35-2d20-496b-bc0b-965d64cbd140\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" Jan 31 07:02:24 crc kubenswrapper[4687]: I0131 07:02:24.549646 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce7e9c35-2d20-496b-bc0b-965d64cbd140-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm\" (UID: \"ce7e9c35-2d20-496b-bc0b-965d64cbd140\") " 
pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" Jan 31 07:02:24 crc kubenswrapper[4687]: I0131 07:02:24.549676 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce7e9c35-2d20-496b-bc0b-965d64cbd140-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm\" (UID: \"ce7e9c35-2d20-496b-bc0b-965d64cbd140\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" Jan 31 07:02:24 crc kubenswrapper[4687]: I0131 07:02:24.568240 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2x7m\" (UniqueName: \"kubernetes.io/projected/ce7e9c35-2d20-496b-bc0b-965d64cbd140-kube-api-access-s2x7m\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm\" (UID: \"ce7e9c35-2d20-496b-bc0b-965d64cbd140\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" Jan 31 07:02:24 crc kubenswrapper[4687]: I0131 07:02:24.706529 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" Jan 31 07:02:25 crc kubenswrapper[4687]: I0131 07:02:25.106315 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm"] Jan 31 07:02:25 crc kubenswrapper[4687]: I0131 07:02:25.951352 4687 generic.go:334] "Generic (PLEG): container finished" podID="ce7e9c35-2d20-496b-bc0b-965d64cbd140" containerID="7ca01d48dbe92fb5a08f8e95f98f23cc491fb770dcba5ff32ffb86bf7778d0a3" exitCode=0 Jan 31 07:02:25 crc kubenswrapper[4687]: I0131 07:02:25.951398 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" event={"ID":"ce7e9c35-2d20-496b-bc0b-965d64cbd140","Type":"ContainerDied","Data":"7ca01d48dbe92fb5a08f8e95f98f23cc491fb770dcba5ff32ffb86bf7778d0a3"} Jan 31 07:02:25 crc kubenswrapper[4687]: I0131 07:02:25.951450 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" event={"ID":"ce7e9c35-2d20-496b-bc0b-965d64cbd140","Type":"ContainerStarted","Data":"36a0566db5ee622761a347c70da49a0d9c8101e794d8a760e74b5dd8d7c0c43d"} Jan 31 07:02:26 crc kubenswrapper[4687]: I0131 07:02:26.958764 4687 generic.go:334] "Generic (PLEG): container finished" podID="ce7e9c35-2d20-496b-bc0b-965d64cbd140" containerID="99a0276c7b1ffbc131d0da854990fcb0a24f2905e77e23c70ea6a702b971e7b5" exitCode=0 Jan 31 07:02:26 crc kubenswrapper[4687]: I0131 07:02:26.958817 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" event={"ID":"ce7e9c35-2d20-496b-bc0b-965d64cbd140","Type":"ContainerDied","Data":"99a0276c7b1ffbc131d0da854990fcb0a24f2905e77e23c70ea6a702b971e7b5"} Jan 31 07:02:27 crc kubenswrapper[4687]: I0131 07:02:27.967834 4687 generic.go:334] 
"Generic (PLEG): container finished" podID="ce7e9c35-2d20-496b-bc0b-965d64cbd140" containerID="852a1aca08758c98de5c971f20ee29e97affdf43fcb33f46751b7551f0b07044" exitCode=0 Jan 31 07:02:27 crc kubenswrapper[4687]: I0131 07:02:27.967916 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" event={"ID":"ce7e9c35-2d20-496b-bc0b-965d64cbd140","Type":"ContainerDied","Data":"852a1aca08758c98de5c971f20ee29e97affdf43fcb33f46751b7551f0b07044"} Jan 31 07:02:28 crc kubenswrapper[4687]: I0131 07:02:28.684803 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:02:28 crc kubenswrapper[4687]: I0131 07:02:28.684864 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:02:28 crc kubenswrapper[4687]: I0131 07:02:28.684908 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 07:02:28 crc kubenswrapper[4687]: I0131 07:02:28.685451 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2870678d8ef3b4ce66abc3a889acd9cf6e04c0f95a1291bebaab2b0448491609"} pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:02:28 crc kubenswrapper[4687]: I0131 07:02:28.685512 4687 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" containerID="cri-o://2870678d8ef3b4ce66abc3a889acd9cf6e04c0f95a1291bebaab2b0448491609" gracePeriod=600 Jan 31 07:02:28 crc kubenswrapper[4687]: I0131 07:02:28.975393 4687 generic.go:334] "Generic (PLEG): container finished" podID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerID="2870678d8ef3b4ce66abc3a889acd9cf6e04c0f95a1291bebaab2b0448491609" exitCode=0 Jan 31 07:02:28 crc kubenswrapper[4687]: I0131 07:02:28.975473 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerDied","Data":"2870678d8ef3b4ce66abc3a889acd9cf6e04c0f95a1291bebaab2b0448491609"} Jan 31 07:02:28 crc kubenswrapper[4687]: I0131 07:02:28.975758 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerStarted","Data":"f4ad799ecadff0d9823e53b53153bf63acdd5cce54e7a1eb02184f7b2a6947f6"} Jan 31 07:02:28 crc kubenswrapper[4687]: I0131 07:02:28.975786 4687 scope.go:117] "RemoveContainer" containerID="0cd4248235582b525083ab077dd16b2a2243217ecf8d962c50ecbf6042075994" Jan 31 07:02:29 crc kubenswrapper[4687]: I0131 07:02:29.180060 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" Jan 31 07:02:29 crc kubenswrapper[4687]: I0131 07:02:29.301304 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2x7m\" (UniqueName: \"kubernetes.io/projected/ce7e9c35-2d20-496b-bc0b-965d64cbd140-kube-api-access-s2x7m\") pod \"ce7e9c35-2d20-496b-bc0b-965d64cbd140\" (UID: \"ce7e9c35-2d20-496b-bc0b-965d64cbd140\") " Jan 31 07:02:29 crc kubenswrapper[4687]: I0131 07:02:29.301421 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce7e9c35-2d20-496b-bc0b-965d64cbd140-util\") pod \"ce7e9c35-2d20-496b-bc0b-965d64cbd140\" (UID: \"ce7e9c35-2d20-496b-bc0b-965d64cbd140\") " Jan 31 07:02:29 crc kubenswrapper[4687]: I0131 07:02:29.301556 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce7e9c35-2d20-496b-bc0b-965d64cbd140-bundle\") pod \"ce7e9c35-2d20-496b-bc0b-965d64cbd140\" (UID: \"ce7e9c35-2d20-496b-bc0b-965d64cbd140\") " Jan 31 07:02:29 crc kubenswrapper[4687]: I0131 07:02:29.302274 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce7e9c35-2d20-496b-bc0b-965d64cbd140-bundle" (OuterVolumeSpecName: "bundle") pod "ce7e9c35-2d20-496b-bc0b-965d64cbd140" (UID: "ce7e9c35-2d20-496b-bc0b-965d64cbd140"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:02:29 crc kubenswrapper[4687]: I0131 07:02:29.306281 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7e9c35-2d20-496b-bc0b-965d64cbd140-kube-api-access-s2x7m" (OuterVolumeSpecName: "kube-api-access-s2x7m") pod "ce7e9c35-2d20-496b-bc0b-965d64cbd140" (UID: "ce7e9c35-2d20-496b-bc0b-965d64cbd140"). InnerVolumeSpecName "kube-api-access-s2x7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:02:29 crc kubenswrapper[4687]: I0131 07:02:29.317668 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce7e9c35-2d20-496b-bc0b-965d64cbd140-util" (OuterVolumeSpecName: "util") pod "ce7e9c35-2d20-496b-bc0b-965d64cbd140" (UID: "ce7e9c35-2d20-496b-bc0b-965d64cbd140"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:02:29 crc kubenswrapper[4687]: I0131 07:02:29.403275 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce7e9c35-2d20-496b-bc0b-965d64cbd140-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:02:29 crc kubenswrapper[4687]: I0131 07:02:29.403314 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2x7m\" (UniqueName: \"kubernetes.io/projected/ce7e9c35-2d20-496b-bc0b-965d64cbd140-kube-api-access-s2x7m\") on node \"crc\" DevicePath \"\"" Jan 31 07:02:29 crc kubenswrapper[4687]: I0131 07:02:29.403325 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce7e9c35-2d20-496b-bc0b-965d64cbd140-util\") on node \"crc\" DevicePath \"\"" Jan 31 07:02:29 crc kubenswrapper[4687]: I0131 07:02:29.983128 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" Jan 31 07:02:29 crc kubenswrapper[4687]: I0131 07:02:29.983091 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm" event={"ID":"ce7e9c35-2d20-496b-bc0b-965d64cbd140","Type":"ContainerDied","Data":"36a0566db5ee622761a347c70da49a0d9c8101e794d8a760e74b5dd8d7c0c43d"} Jan 31 07:02:29 crc kubenswrapper[4687]: I0131 07:02:29.983600 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36a0566db5ee622761a347c70da49a0d9c8101e794d8a760e74b5dd8d7c0c43d" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.718918 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv"] Jan 31 07:02:32 crc kubenswrapper[4687]: E0131 07:02:32.719389 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7e9c35-2d20-496b-bc0b-965d64cbd140" containerName="util" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.719400 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7e9c35-2d20-496b-bc0b-965d64cbd140" containerName="util" Jan 31 07:02:32 crc kubenswrapper[4687]: E0131 07:02:32.719425 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7e9c35-2d20-496b-bc0b-965d64cbd140" containerName="pull" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.719431 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7e9c35-2d20-496b-bc0b-965d64cbd140" containerName="pull" Jan 31 07:02:32 crc kubenswrapper[4687]: E0131 07:02:32.719451 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7e9c35-2d20-496b-bc0b-965d64cbd140" containerName="extract" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.719457 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7e9c35-2d20-496b-bc0b-965d64cbd140" 
containerName="extract" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.719549 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7e9c35-2d20-496b-bc0b-965d64cbd140" containerName="extract" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.719922 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.724347 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.728792 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-njx6c" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.728798 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.742610 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv"] Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.745100 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f7bf014-81af-465e-a08f-f9a1dc8a7383-webhook-cert\") pod \"mariadb-operator-controller-manager-8d596dc7f-pc8lv\" (UID: \"2f7bf014-81af-465e-a08f-f9a1dc8a7383\") " pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.745174 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f7bf014-81af-465e-a08f-f9a1dc8a7383-apiservice-cert\") pod 
\"mariadb-operator-controller-manager-8d596dc7f-pc8lv\" (UID: \"2f7bf014-81af-465e-a08f-f9a1dc8a7383\") " pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.745228 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgb5p\" (UniqueName: \"kubernetes.io/projected/2f7bf014-81af-465e-a08f-f9a1dc8a7383-kube-api-access-hgb5p\") pod \"mariadb-operator-controller-manager-8d596dc7f-pc8lv\" (UID: \"2f7bf014-81af-465e-a08f-f9a1dc8a7383\") " pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.846366 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f7bf014-81af-465e-a08f-f9a1dc8a7383-apiservice-cert\") pod \"mariadb-operator-controller-manager-8d596dc7f-pc8lv\" (UID: \"2f7bf014-81af-465e-a08f-f9a1dc8a7383\") " pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.846505 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgb5p\" (UniqueName: \"kubernetes.io/projected/2f7bf014-81af-465e-a08f-f9a1dc8a7383-kube-api-access-hgb5p\") pod \"mariadb-operator-controller-manager-8d596dc7f-pc8lv\" (UID: \"2f7bf014-81af-465e-a08f-f9a1dc8a7383\") " pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.846604 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f7bf014-81af-465e-a08f-f9a1dc8a7383-webhook-cert\") pod \"mariadb-operator-controller-manager-8d596dc7f-pc8lv\" (UID: \"2f7bf014-81af-465e-a08f-f9a1dc8a7383\") " 
pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.852094 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f7bf014-81af-465e-a08f-f9a1dc8a7383-webhook-cert\") pod \"mariadb-operator-controller-manager-8d596dc7f-pc8lv\" (UID: \"2f7bf014-81af-465e-a08f-f9a1dc8a7383\") " pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.864174 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgb5p\" (UniqueName: \"kubernetes.io/projected/2f7bf014-81af-465e-a08f-f9a1dc8a7383-kube-api-access-hgb5p\") pod \"mariadb-operator-controller-manager-8d596dc7f-pc8lv\" (UID: \"2f7bf014-81af-465e-a08f-f9a1dc8a7383\") " pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" Jan 31 07:02:32 crc kubenswrapper[4687]: I0131 07:02:32.867210 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f7bf014-81af-465e-a08f-f9a1dc8a7383-apiservice-cert\") pod \"mariadb-operator-controller-manager-8d596dc7f-pc8lv\" (UID: \"2f7bf014-81af-465e-a08f-f9a1dc8a7383\") " pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" Jan 31 07:02:33 crc kubenswrapper[4687]: I0131 07:02:33.036378 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" Jan 31 07:02:33 crc kubenswrapper[4687]: I0131 07:02:33.429061 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv"] Jan 31 07:02:34 crc kubenswrapper[4687]: I0131 07:02:34.022348 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" event={"ID":"2f7bf014-81af-465e-a08f-f9a1dc8a7383","Type":"ContainerStarted","Data":"d2f8a108c7c4ebab9518c9a6c9bd5820050542bd8e340623cdf100ccabb29418"} Jan 31 07:02:37 crc kubenswrapper[4687]: I0131 07:02:37.042885 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" event={"ID":"2f7bf014-81af-465e-a08f-f9a1dc8a7383","Type":"ContainerStarted","Data":"149a0a77f01ed0d6fcc35f7c597a832936ffd194ba3f6c3b03b53d87e5cd377f"} Jan 31 07:02:37 crc kubenswrapper[4687]: I0131 07:02:37.043473 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" Jan 31 07:02:37 crc kubenswrapper[4687]: I0131 07:02:37.066014 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" podStartSLOduration=1.957123829 podStartE2EDuration="5.065986634s" podCreationTimestamp="2026-01-31 07:02:32 +0000 UTC" firstStartedPulling="2026-01-31 07:02:33.444526781 +0000 UTC m=+1179.721786356" lastFinishedPulling="2026-01-31 07:02:36.553389586 +0000 UTC m=+1182.830649161" observedRunningTime="2026-01-31 07:02:37.058871689 +0000 UTC m=+1183.336131314" watchObservedRunningTime="2026-01-31 07:02:37.065986634 +0000 UTC m=+1183.343246219" Jan 31 07:02:43 crc kubenswrapper[4687]: I0131 07:02:43.041784 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" Jan 31 07:02:45 crc kubenswrapper[4687]: I0131 07:02:45.818322 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-6cpr7"] Jan 31 07:02:45 crc kubenswrapper[4687]: I0131 07:02:45.819693 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-6cpr7" Jan 31 07:02:45 crc kubenswrapper[4687]: I0131 07:02:45.822205 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-4pwhr" Jan 31 07:02:45 crc kubenswrapper[4687]: I0131 07:02:45.833785 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-6cpr7"] Jan 31 07:02:45 crc kubenswrapper[4687]: I0131 07:02:45.903758 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pccqz\" (UniqueName: \"kubernetes.io/projected/f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c-kube-api-access-pccqz\") pod \"infra-operator-index-6cpr7\" (UID: \"f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c\") " pod="openstack-operators/infra-operator-index-6cpr7" Jan 31 07:02:46 crc kubenswrapper[4687]: I0131 07:02:46.004442 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pccqz\" (UniqueName: \"kubernetes.io/projected/f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c-kube-api-access-pccqz\") pod \"infra-operator-index-6cpr7\" (UID: \"f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c\") " pod="openstack-operators/infra-operator-index-6cpr7" Jan 31 07:02:46 crc kubenswrapper[4687]: I0131 07:02:46.025311 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pccqz\" (UniqueName: \"kubernetes.io/projected/f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c-kube-api-access-pccqz\") pod \"infra-operator-index-6cpr7\" (UID: \"f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c\") 
" pod="openstack-operators/infra-operator-index-6cpr7" Jan 31 07:02:46 crc kubenswrapper[4687]: I0131 07:02:46.135267 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-6cpr7" Jan 31 07:02:46 crc kubenswrapper[4687]: I0131 07:02:46.549954 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-6cpr7"] Jan 31 07:02:47 crc kubenswrapper[4687]: I0131 07:02:47.107750 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-6cpr7" event={"ID":"f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c","Type":"ContainerStarted","Data":"2a6843d6686c923b2dbcc076461f24cf7176f8eca079df83e16a0fced74fac6c"} Jan 31 07:02:48 crc kubenswrapper[4687]: I0131 07:02:48.113986 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-6cpr7" event={"ID":"f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c","Type":"ContainerStarted","Data":"33f17f80d53674c3e3aad38efd24ddac5dfcf4f4e60a1d1c6e7dee1544f9cfd9"} Jan 31 07:02:48 crc kubenswrapper[4687]: I0131 07:02:48.128783 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-6cpr7" podStartSLOduration=1.849415144 podStartE2EDuration="3.128767376s" podCreationTimestamp="2026-01-31 07:02:45 +0000 UTC" firstStartedPulling="2026-01-31 07:02:46.559430994 +0000 UTC m=+1192.836690569" lastFinishedPulling="2026-01-31 07:02:47.838783226 +0000 UTC m=+1194.116042801" observedRunningTime="2026-01-31 07:02:48.1278172 +0000 UTC m=+1194.405076795" watchObservedRunningTime="2026-01-31 07:02:48.128767376 +0000 UTC m=+1194.406026951" Jan 31 07:02:56 crc kubenswrapper[4687]: I0131 07:02:56.135849 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-6cpr7" Jan 31 07:02:56 crc kubenswrapper[4687]: I0131 07:02:56.136518 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/infra-operator-index-6cpr7" Jan 31 07:02:56 crc kubenswrapper[4687]: I0131 07:02:56.163340 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-6cpr7" Jan 31 07:02:56 crc kubenswrapper[4687]: I0131 07:02:56.188566 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-6cpr7" Jan 31 07:02:58 crc kubenswrapper[4687]: I0131 07:02:58.849667 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6"] Jan 31 07:02:58 crc kubenswrapper[4687]: I0131 07:02:58.851190 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" Jan 31 07:02:58 crc kubenswrapper[4687]: I0131 07:02:58.853979 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9sffv" Jan 31 07:02:58 crc kubenswrapper[4687]: I0131 07:02:58.872983 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6"] Jan 31 07:02:58 crc kubenswrapper[4687]: I0131 07:02:58.973230 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/438b0249-f9e2-4627-91ae-313342bdd172-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6\" (UID: \"438b0249-f9e2-4627-91ae-313342bdd172\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" Jan 31 07:02:58 crc kubenswrapper[4687]: I0131 07:02:58.973496 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/438b0249-f9e2-4627-91ae-313342bdd172-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6\" (UID: \"438b0249-f9e2-4627-91ae-313342bdd172\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" Jan 31 07:02:58 crc kubenswrapper[4687]: I0131 07:02:58.973545 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmhqt\" (UniqueName: \"kubernetes.io/projected/438b0249-f9e2-4627-91ae-313342bdd172-kube-api-access-jmhqt\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6\" (UID: \"438b0249-f9e2-4627-91ae-313342bdd172\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" Jan 31 07:02:59 crc kubenswrapper[4687]: I0131 07:02:59.074863 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/438b0249-f9e2-4627-91ae-313342bdd172-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6\" (UID: \"438b0249-f9e2-4627-91ae-313342bdd172\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" Jan 31 07:02:59 crc kubenswrapper[4687]: I0131 07:02:59.074910 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmhqt\" (UniqueName: \"kubernetes.io/projected/438b0249-f9e2-4627-91ae-313342bdd172-kube-api-access-jmhqt\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6\" (UID: \"438b0249-f9e2-4627-91ae-313342bdd172\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" Jan 31 07:02:59 crc kubenswrapper[4687]: I0131 07:02:59.074960 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/438b0249-f9e2-4627-91ae-313342bdd172-util\") pod 
\"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6\" (UID: \"438b0249-f9e2-4627-91ae-313342bdd172\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" Jan 31 07:02:59 crc kubenswrapper[4687]: I0131 07:02:59.075384 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/438b0249-f9e2-4627-91ae-313342bdd172-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6\" (UID: \"438b0249-f9e2-4627-91ae-313342bdd172\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" Jan 31 07:02:59 crc kubenswrapper[4687]: I0131 07:02:59.075522 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/438b0249-f9e2-4627-91ae-313342bdd172-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6\" (UID: \"438b0249-f9e2-4627-91ae-313342bdd172\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" Jan 31 07:02:59 crc kubenswrapper[4687]: I0131 07:02:59.101342 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmhqt\" (UniqueName: \"kubernetes.io/projected/438b0249-f9e2-4627-91ae-313342bdd172-kube-api-access-jmhqt\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6\" (UID: \"438b0249-f9e2-4627-91ae-313342bdd172\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" Jan 31 07:02:59 crc kubenswrapper[4687]: I0131 07:02:59.175377 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" Jan 31 07:02:59 crc kubenswrapper[4687]: I0131 07:02:59.625708 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6"] Jan 31 07:03:00 crc kubenswrapper[4687]: I0131 07:03:00.180216 4687 generic.go:334] "Generic (PLEG): container finished" podID="438b0249-f9e2-4627-91ae-313342bdd172" containerID="65d71177a50b9a069065816151af961c9cd5dd25d44a03ab695c30380b1ae4f4" exitCode=0 Jan 31 07:03:00 crc kubenswrapper[4687]: I0131 07:03:00.180262 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" event={"ID":"438b0249-f9e2-4627-91ae-313342bdd172","Type":"ContainerDied","Data":"65d71177a50b9a069065816151af961c9cd5dd25d44a03ab695c30380b1ae4f4"} Jan 31 07:03:00 crc kubenswrapper[4687]: I0131 07:03:00.181758 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" event={"ID":"438b0249-f9e2-4627-91ae-313342bdd172","Type":"ContainerStarted","Data":"6f3c74f6e200b5956a2dbe4cc5751a4e637680809bb80e8082c5a211353fdcd4"} Jan 31 07:03:01 crc kubenswrapper[4687]: I0131 07:03:01.188387 4687 generic.go:334] "Generic (PLEG): container finished" podID="438b0249-f9e2-4627-91ae-313342bdd172" containerID="30413941ed0d998a434f2de78224017bf1e3e9c012db7f20228412a582b6b2be" exitCode=0 Jan 31 07:03:01 crc kubenswrapper[4687]: I0131 07:03:01.188472 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" event={"ID":"438b0249-f9e2-4627-91ae-313342bdd172","Type":"ContainerDied","Data":"30413941ed0d998a434f2de78224017bf1e3e9c012db7f20228412a582b6b2be"} Jan 31 07:03:02 crc kubenswrapper[4687]: I0131 07:03:02.197786 4687 generic.go:334] 
"Generic (PLEG): container finished" podID="438b0249-f9e2-4627-91ae-313342bdd172" containerID="9e70ab5a3efc6abd4f783aeeb7bb94ee1f9ec80dd3ab2d38c9f0b54ee56d021b" exitCode=0 Jan 31 07:03:02 crc kubenswrapper[4687]: I0131 07:03:02.197835 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" event={"ID":"438b0249-f9e2-4627-91ae-313342bdd172","Type":"ContainerDied","Data":"9e70ab5a3efc6abd4f783aeeb7bb94ee1f9ec80dd3ab2d38c9f0b54ee56d021b"} Jan 31 07:03:03 crc kubenswrapper[4687]: I0131 07:03:03.459680 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" Jan 31 07:03:03 crc kubenswrapper[4687]: I0131 07:03:03.531073 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/438b0249-f9e2-4627-91ae-313342bdd172-bundle\") pod \"438b0249-f9e2-4627-91ae-313342bdd172\" (UID: \"438b0249-f9e2-4627-91ae-313342bdd172\") " Jan 31 07:03:03 crc kubenswrapper[4687]: I0131 07:03:03.531466 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/438b0249-f9e2-4627-91ae-313342bdd172-util\") pod \"438b0249-f9e2-4627-91ae-313342bdd172\" (UID: \"438b0249-f9e2-4627-91ae-313342bdd172\") " Jan 31 07:03:03 crc kubenswrapper[4687]: I0131 07:03:03.531582 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmhqt\" (UniqueName: \"kubernetes.io/projected/438b0249-f9e2-4627-91ae-313342bdd172-kube-api-access-jmhqt\") pod \"438b0249-f9e2-4627-91ae-313342bdd172\" (UID: \"438b0249-f9e2-4627-91ae-313342bdd172\") " Jan 31 07:03:03 crc kubenswrapper[4687]: I0131 07:03:03.532850 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/438b0249-f9e2-4627-91ae-313342bdd172-bundle" (OuterVolumeSpecName: "bundle") pod "438b0249-f9e2-4627-91ae-313342bdd172" (UID: "438b0249-f9e2-4627-91ae-313342bdd172"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:03:03 crc kubenswrapper[4687]: I0131 07:03:03.542750 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/438b0249-f9e2-4627-91ae-313342bdd172-kube-api-access-jmhqt" (OuterVolumeSpecName: "kube-api-access-jmhqt") pod "438b0249-f9e2-4627-91ae-313342bdd172" (UID: "438b0249-f9e2-4627-91ae-313342bdd172"). InnerVolumeSpecName "kube-api-access-jmhqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:03:03 crc kubenswrapper[4687]: I0131 07:03:03.554251 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/438b0249-f9e2-4627-91ae-313342bdd172-util" (OuterVolumeSpecName: "util") pod "438b0249-f9e2-4627-91ae-313342bdd172" (UID: "438b0249-f9e2-4627-91ae-313342bdd172"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:03:03 crc kubenswrapper[4687]: I0131 07:03:03.632227 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmhqt\" (UniqueName: \"kubernetes.io/projected/438b0249-f9e2-4627-91ae-313342bdd172-kube-api-access-jmhqt\") on node \"crc\" DevicePath \"\"" Jan 31 07:03:03 crc kubenswrapper[4687]: I0131 07:03:03.632264 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/438b0249-f9e2-4627-91ae-313342bdd172-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:03:03 crc kubenswrapper[4687]: I0131 07:03:03.632273 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/438b0249-f9e2-4627-91ae-313342bdd172-util\") on node \"crc\" DevicePath \"\"" Jan 31 07:03:04 crc kubenswrapper[4687]: I0131 07:03:04.210916 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" event={"ID":"438b0249-f9e2-4627-91ae-313342bdd172","Type":"ContainerDied","Data":"6f3c74f6e200b5956a2dbe4cc5751a4e637680809bb80e8082c5a211353fdcd4"} Jan 31 07:03:04 crc kubenswrapper[4687]: I0131 07:03:04.210982 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f3c74f6e200b5956a2dbe4cc5751a4e637680809bb80e8082c5a211353fdcd4" Jan 31 07:03:04 crc kubenswrapper[4687]: I0131 07:03:04.211001 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.391842 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf"] Jan 31 07:03:16 crc kubenswrapper[4687]: E0131 07:03:16.392546 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438b0249-f9e2-4627-91ae-313342bdd172" containerName="pull" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.392558 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="438b0249-f9e2-4627-91ae-313342bdd172" containerName="pull" Jan 31 07:03:16 crc kubenswrapper[4687]: E0131 07:03:16.392576 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438b0249-f9e2-4627-91ae-313342bdd172" containerName="extract" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.392582 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="438b0249-f9e2-4627-91ae-313342bdd172" containerName="extract" Jan 31 07:03:16 crc kubenswrapper[4687]: E0131 07:03:16.392588 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438b0249-f9e2-4627-91ae-313342bdd172" containerName="util" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.392595 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="438b0249-f9e2-4627-91ae-313342bdd172" containerName="util" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.392701 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="438b0249-f9e2-4627-91ae-313342bdd172" containerName="extract" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.393088 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.394674 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.395965 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-j5hhh" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.401432 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf"] Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.516842 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgjt5\" (UniqueName: \"kubernetes.io/projected/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-kube-api-access-zgjt5\") pod \"infra-operator-controller-manager-64596d49b-mdfmf\" (UID: \"d8461d3e-8187-48d8-bdc5-1f97545dc6d5\") " pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.517174 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-webhook-cert\") pod \"infra-operator-controller-manager-64596d49b-mdfmf\" (UID: \"d8461d3e-8187-48d8-bdc5-1f97545dc6d5\") " pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.517226 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-apiservice-cert\") pod \"infra-operator-controller-manager-64596d49b-mdfmf\" (UID: 
\"d8461d3e-8187-48d8-bdc5-1f97545dc6d5\") " pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.617872 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-webhook-cert\") pod \"infra-operator-controller-manager-64596d49b-mdfmf\" (UID: \"d8461d3e-8187-48d8-bdc5-1f97545dc6d5\") " pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.617954 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-apiservice-cert\") pod \"infra-operator-controller-manager-64596d49b-mdfmf\" (UID: \"d8461d3e-8187-48d8-bdc5-1f97545dc6d5\") " pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.618102 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgjt5\" (UniqueName: \"kubernetes.io/projected/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-kube-api-access-zgjt5\") pod \"infra-operator-controller-manager-64596d49b-mdfmf\" (UID: \"d8461d3e-8187-48d8-bdc5-1f97545dc6d5\") " pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.623190 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-apiservice-cert\") pod \"infra-operator-controller-manager-64596d49b-mdfmf\" (UID: \"d8461d3e-8187-48d8-bdc5-1f97545dc6d5\") " pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.623238 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-webhook-cert\") pod \"infra-operator-controller-manager-64596d49b-mdfmf\" (UID: \"d8461d3e-8187-48d8-bdc5-1f97545dc6d5\") " pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.634756 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgjt5\" (UniqueName: \"kubernetes.io/projected/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-kube-api-access-zgjt5\") pod \"infra-operator-controller-manager-64596d49b-mdfmf\" (UID: \"d8461d3e-8187-48d8-bdc5-1f97545dc6d5\") " pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" Jan 31 07:03:16 crc kubenswrapper[4687]: I0131 07:03:16.709064 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" Jan 31 07:03:17 crc kubenswrapper[4687]: I0131 07:03:17.124935 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf"] Jan 31 07:03:17 crc kubenswrapper[4687]: I0131 07:03:17.277806 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" event={"ID":"d8461d3e-8187-48d8-bdc5-1f97545dc6d5","Type":"ContainerStarted","Data":"01e91627b9caab8674b9509c0c3754e569cac6f1765c19f055fc9a60feb103ad"} Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.405891 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.439692 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.439845 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.441595 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.442191 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.442566 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.442591 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.442666 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.442682 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.442784 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.442847 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"galera-openstack-dockercfg-wtz8g" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.443519 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"kube-root-ca.crt" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.443884 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openshift-service-ca.crt" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.444621 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config-data" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.542850 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-config-data-default\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.543163 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.543192 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-kolla-config\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc 
kubenswrapper[4687]: I0131 07:03:18.543208 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-operator-scripts\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.543241 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4m96\" (UniqueName: \"kubernetes.io/projected/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-kube-api-access-x4m96\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.543272 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-config-data-generated\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644095 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-kolla-config\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644154 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-operator-scripts\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: 
I0131 07:03:18.644211 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-operator-scripts\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644235 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4m96\" (UniqueName: \"kubernetes.io/projected/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-kube-api-access-x4m96\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644266 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0e0aeef7-ccda-496c-ba2b-ca020077baf2-config-data-generated\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644301 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-config-data-generated\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644334 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644354 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55vhm\" (UniqueName: \"kubernetes.io/projected/0e0aeef7-ccda-496c-ba2b-ca020077baf2-kube-api-access-55vhm\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644376 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-config-data-default\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644426 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644459 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644488 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-kolla-config\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644514 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-config-data-default\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644540 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfsj6\" (UniqueName: \"kubernetes.io/projected/ee3a4967-773c-4106-955e-ce3823c96169-kube-api-access-rfsj6\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644563 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644589 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee3a4967-773c-4106-955e-ce3823c96169-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644613 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-kolla-config\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.644641 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-config-data-default\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.645588 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") device mount path \"/mnt/openstack/pv02\"" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.645678 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-config-data-generated\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.645823 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-kolla-config\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.646092 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-config-data-default\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.646822 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-operator-scripts\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.669355 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4m96\" (UniqueName: \"kubernetes.io/projected/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-kube-api-access-x4m96\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.670482 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-2\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.746737 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-operator-scripts\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.746927 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0e0aeef7-ccda-496c-ba2b-ca020077baf2-config-data-generated\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.747056 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-1\" (UID: 
\"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.747082 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55vhm\" (UniqueName: \"kubernetes.io/projected/0e0aeef7-ccda-496c-ba2b-ca020077baf2-kube-api-access-55vhm\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.747141 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.747210 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.747265 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-kolla-config\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.747305 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-config-data-default\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc 
kubenswrapper[4687]: I0131 07:03:18.747327 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfsj6\" (UniqueName: \"kubernetes.io/projected/ee3a4967-773c-4106-955e-ce3823c96169-kube-api-access-rfsj6\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.747368 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee3a4967-773c-4106-955e-ce3823c96169-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.747421 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-kolla-config\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.747468 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-config-data-default\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.747792 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") device mount path \"/mnt/openstack/pv09\"" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.748511 4687 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") device mount path \"/mnt/openstack/pv04\"" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.748627 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee3a4967-773c-4106-955e-ce3823c96169-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.748750 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-config-data-default\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.748988 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0e0aeef7-ccda-496c-ba2b-ca020077baf2-config-data-generated\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.749086 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-config-data-default\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.750008 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-kolla-config\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.750014 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-kolla-config\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.750084 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-operator-scripts\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.752330 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.764321 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.767826 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfsj6\" (UniqueName: \"kubernetes.io/projected/ee3a4967-773c-4106-955e-ce3823c96169-kube-api-access-rfsj6\") pod \"openstack-galera-0\" (UID: 
\"ee3a4967-773c-4106-955e-ce3823c96169\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.770843 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.774672 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55vhm\" (UniqueName: \"kubernetes.io/projected/0e0aeef7-ccda-496c-ba2b-ca020077baf2-kube-api-access-55vhm\") pod \"openstack-galera-1\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.782206 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.791675 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:18 crc kubenswrapper[4687]: I0131 07:03:18.798761 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:19 crc kubenswrapper[4687]: I0131 07:03:19.019577 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Jan 31 07:03:19 crc kubenswrapper[4687]: I0131 07:03:19.272744 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Jan 31 07:03:19 crc kubenswrapper[4687]: I0131 07:03:19.279388 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Jan 31 07:03:20 crc kubenswrapper[4687]: I0131 07:03:20.299385 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" event={"ID":"d8461d3e-8187-48d8-bdc5-1f97545dc6d5","Type":"ContainerStarted","Data":"0b81b08f07ce40d2b31e152d24ea6f74dd6a8172516ca49b9692aed081c6ad58"} Jan 31 07:03:20 crc kubenswrapper[4687]: I0131 07:03:20.300014 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" Jan 31 07:03:20 crc kubenswrapper[4687]: I0131 07:03:20.300497 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6","Type":"ContainerStarted","Data":"a455cab65216009dd0964f2f5140fe7682f00c9bf94612d96d740821ae51b381"} Jan 31 07:03:20 crc kubenswrapper[4687]: I0131 07:03:20.301491 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"ee3a4967-773c-4106-955e-ce3823c96169","Type":"ContainerStarted","Data":"ab860b5f9af393d4d563cdd424c16d5a1108d135096f6503aa4ffc4004fed4df"} Jan 31 07:03:20 crc kubenswrapper[4687]: I0131 07:03:20.302386 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" 
event={"ID":"0e0aeef7-ccda-496c-ba2b-ca020077baf2","Type":"ContainerStarted","Data":"607e21d95b9712f768e98cf260beda4f4809b83f85ec5348f12db51d2057e720"} Jan 31 07:03:20 crc kubenswrapper[4687]: I0131 07:03:20.320075 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" podStartSLOduration=1.4634723 podStartE2EDuration="4.320057572s" podCreationTimestamp="2026-01-31 07:03:16 +0000 UTC" firstStartedPulling="2026-01-31 07:03:17.131517937 +0000 UTC m=+1223.408777522" lastFinishedPulling="2026-01-31 07:03:19.988103219 +0000 UTC m=+1226.265362794" observedRunningTime="2026-01-31 07:03:20.315295432 +0000 UTC m=+1226.592555017" watchObservedRunningTime="2026-01-31 07:03:20.320057572 +0000 UTC m=+1226.597317147" Jan 31 07:03:26 crc kubenswrapper[4687]: I0131 07:03:26.713199 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" Jan 31 07:03:29 crc kubenswrapper[4687]: I0131 07:03:29.366844 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"ee3a4967-773c-4106-955e-ce3823c96169","Type":"ContainerStarted","Data":"9e58ea79ce0d44062c43211417875f7802750794ea39a9d102294de4bf3d6c6c"} Jan 31 07:03:29 crc kubenswrapper[4687]: I0131 07:03:29.369699 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"0e0aeef7-ccda-496c-ba2b-ca020077baf2","Type":"ContainerStarted","Data":"1a45e3150202b623290582f2038408c30b6380a28ba980e4c46a13e79e9cfe84"} Jan 31 07:03:29 crc kubenswrapper[4687]: I0131 07:03:29.371927 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6","Type":"ContainerStarted","Data":"b4d1d2310481ed255cf1785b3f923d2133eb8ab1ec6ca22e85e878bdb467855e"} Jan 31 07:03:30 crc 
kubenswrapper[4687]: I0131 07:03:30.252329 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/memcached-0"] Jan 31 07:03:30 crc kubenswrapper[4687]: I0131 07:03:30.253057 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/memcached-0" Jan 31 07:03:30 crc kubenswrapper[4687]: I0131 07:03:30.254835 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"memcached-memcached-dockercfg-hhgnr" Jan 31 07:03:30 crc kubenswrapper[4687]: I0131 07:03:30.255028 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"memcached-config-data" Jan 31 07:03:30 crc kubenswrapper[4687]: I0131 07:03:30.268637 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/memcached-0"] Jan 31 07:03:30 crc kubenswrapper[4687]: I0131 07:03:30.319061 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-kolla-config\") pod \"memcached-0\" (UID: \"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8\") " pod="glance-kuttl-tests/memcached-0" Jan 31 07:03:30 crc kubenswrapper[4687]: I0131 07:03:30.319145 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvlc7\" (UniqueName: \"kubernetes.io/projected/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-kube-api-access-nvlc7\") pod \"memcached-0\" (UID: \"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8\") " pod="glance-kuttl-tests/memcached-0" Jan 31 07:03:30 crc kubenswrapper[4687]: I0131 07:03:30.319178 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-config-data\") pod \"memcached-0\" (UID: \"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8\") " pod="glance-kuttl-tests/memcached-0" Jan 31 
07:03:30 crc kubenswrapper[4687]: I0131 07:03:30.420897 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-kolla-config\") pod \"memcached-0\" (UID: \"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8\") " pod="glance-kuttl-tests/memcached-0" Jan 31 07:03:30 crc kubenswrapper[4687]: I0131 07:03:30.420992 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvlc7\" (UniqueName: \"kubernetes.io/projected/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-kube-api-access-nvlc7\") pod \"memcached-0\" (UID: \"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8\") " pod="glance-kuttl-tests/memcached-0" Jan 31 07:03:30 crc kubenswrapper[4687]: I0131 07:03:30.421032 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-config-data\") pod \"memcached-0\" (UID: \"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8\") " pod="glance-kuttl-tests/memcached-0" Jan 31 07:03:30 crc kubenswrapper[4687]: I0131 07:03:30.422017 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-config-data\") pod \"memcached-0\" (UID: \"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8\") " pod="glance-kuttl-tests/memcached-0" Jan 31 07:03:30 crc kubenswrapper[4687]: I0131 07:03:30.422054 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-kolla-config\") pod \"memcached-0\" (UID: \"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8\") " pod="glance-kuttl-tests/memcached-0" Jan 31 07:03:30 crc kubenswrapper[4687]: I0131 07:03:30.441102 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvlc7\" (UniqueName: 
\"kubernetes.io/projected/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-kube-api-access-nvlc7\") pod \"memcached-0\" (UID: \"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8\") " pod="glance-kuttl-tests/memcached-0" Jan 31 07:03:30 crc kubenswrapper[4687]: I0131 07:03:30.570356 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/memcached-0" Jan 31 07:03:30 crc kubenswrapper[4687]: I0131 07:03:30.791573 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/memcached-0"] Jan 31 07:03:31 crc kubenswrapper[4687]: I0131 07:03:31.383509 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/memcached-0" event={"ID":"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8","Type":"ContainerStarted","Data":"c83acd8175d29811791030a8ff2b871abd4624afb9f8c503e0de3353544a4a54"} Jan 31 07:03:32 crc kubenswrapper[4687]: I0131 07:03:32.810201 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-dk9wm"] Jan 31 07:03:32 crc kubenswrapper[4687]: I0131 07:03:32.811609 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-dk9wm" Jan 31 07:03:32 crc kubenswrapper[4687]: I0131 07:03:32.814763 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-77tgf" Jan 31 07:03:32 crc kubenswrapper[4687]: I0131 07:03:32.827443 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-dk9wm"] Jan 31 07:03:32 crc kubenswrapper[4687]: I0131 07:03:32.953450 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2f4j\" (UniqueName: \"kubernetes.io/projected/4a0d6f56-70da-4e04-a580-efa274d918c1-kube-api-access-c2f4j\") pod \"rabbitmq-cluster-operator-index-dk9wm\" (UID: \"4a0d6f56-70da-4e04-a580-efa274d918c1\") " pod="openstack-operators/rabbitmq-cluster-operator-index-dk9wm" Jan 31 07:03:33 crc kubenswrapper[4687]: I0131 07:03:33.054839 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2f4j\" (UniqueName: \"kubernetes.io/projected/4a0d6f56-70da-4e04-a580-efa274d918c1-kube-api-access-c2f4j\") pod \"rabbitmq-cluster-operator-index-dk9wm\" (UID: \"4a0d6f56-70da-4e04-a580-efa274d918c1\") " pod="openstack-operators/rabbitmq-cluster-operator-index-dk9wm" Jan 31 07:03:33 crc kubenswrapper[4687]: I0131 07:03:33.076176 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2f4j\" (UniqueName: \"kubernetes.io/projected/4a0d6f56-70da-4e04-a580-efa274d918c1-kube-api-access-c2f4j\") pod \"rabbitmq-cluster-operator-index-dk9wm\" (UID: \"4a0d6f56-70da-4e04-a580-efa274d918c1\") " pod="openstack-operators/rabbitmq-cluster-operator-index-dk9wm" Jan 31 07:03:33 crc kubenswrapper[4687]: I0131 07:03:33.129474 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-dk9wm" Jan 31 07:03:33 crc kubenswrapper[4687]: I0131 07:03:33.395088 4687 generic.go:334] "Generic (PLEG): container finished" podID="ee3a4967-773c-4106-955e-ce3823c96169" containerID="9e58ea79ce0d44062c43211417875f7802750794ea39a9d102294de4bf3d6c6c" exitCode=0 Jan 31 07:03:33 crc kubenswrapper[4687]: I0131 07:03:33.395125 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"ee3a4967-773c-4106-955e-ce3823c96169","Type":"ContainerDied","Data":"9e58ea79ce0d44062c43211417875f7802750794ea39a9d102294de4bf3d6c6c"} Jan 31 07:03:33 crc kubenswrapper[4687]: I0131 07:03:33.397824 4687 generic.go:334] "Generic (PLEG): container finished" podID="0e0aeef7-ccda-496c-ba2b-ca020077baf2" containerID="1a45e3150202b623290582f2038408c30b6380a28ba980e4c46a13e79e9cfe84" exitCode=0 Jan 31 07:03:33 crc kubenswrapper[4687]: I0131 07:03:33.397879 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"0e0aeef7-ccda-496c-ba2b-ca020077baf2","Type":"ContainerDied","Data":"1a45e3150202b623290582f2038408c30b6380a28ba980e4c46a13e79e9cfe84"} Jan 31 07:03:33 crc kubenswrapper[4687]: I0131 07:03:33.403847 4687 generic.go:334] "Generic (PLEG): container finished" podID="fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" containerID="b4d1d2310481ed255cf1785b3f923d2133eb8ab1ec6ca22e85e878bdb467855e" exitCode=0 Jan 31 07:03:33 crc kubenswrapper[4687]: I0131 07:03:33.403904 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6","Type":"ContainerDied","Data":"b4d1d2310481ed255cf1785b3f923d2133eb8ab1ec6ca22e85e878bdb467855e"} Jan 31 07:03:33 crc kubenswrapper[4687]: I0131 07:03:33.409103 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/memcached-0" 
event={"ID":"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8","Type":"ContainerStarted","Data":"e4647911bc50b591925814fdb85949e6e40a2cfc1eecb5a61c8bb5f387d31ebb"} Jan 31 07:03:33 crc kubenswrapper[4687]: I0131 07:03:33.409258 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/memcached-0" Jan 31 07:03:33 crc kubenswrapper[4687]: I0131 07:03:33.457802 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/memcached-0" podStartSLOduration=1.6938745160000002 podStartE2EDuration="3.45778669s" podCreationTimestamp="2026-01-31 07:03:30 +0000 UTC" firstStartedPulling="2026-01-31 07:03:30.797966657 +0000 UTC m=+1237.075226232" lastFinishedPulling="2026-01-31 07:03:32.561878821 +0000 UTC m=+1238.839138406" observedRunningTime="2026-01-31 07:03:33.453217895 +0000 UTC m=+1239.730477480" watchObservedRunningTime="2026-01-31 07:03:33.45778669 +0000 UTC m=+1239.735046265" Jan 31 07:03:33 crc kubenswrapper[4687]: I0131 07:03:33.638358 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-dk9wm"] Jan 31 07:03:33 crc kubenswrapper[4687]: W0131 07:03:33.732647 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a0d6f56_70da_4e04_a580_efa274d918c1.slice/crio-200bff6cbe10aa37d308ec89febc6335d6d4dbbf8d5a852375639b0f7098fe60 WatchSource:0}: Error finding container 200bff6cbe10aa37d308ec89febc6335d6d4dbbf8d5a852375639b0f7098fe60: Status 404 returned error can't find the container with id 200bff6cbe10aa37d308ec89febc6335d6d4dbbf8d5a852375639b0f7098fe60 Jan 31 07:03:34 crc kubenswrapper[4687]: I0131 07:03:34.420294 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"ee3a4967-773c-4106-955e-ce3823c96169","Type":"ContainerStarted","Data":"55e095bf402d4decfeb0d7eab9463616f714666ced8929276007bd2c6f82ed79"} Jan 31 07:03:34 crc 
kubenswrapper[4687]: I0131 07:03:34.423333 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"0e0aeef7-ccda-496c-ba2b-ca020077baf2","Type":"ContainerStarted","Data":"a40bbb2d40b4c752d33842c414645100971102378c943031fff9fcb57cf917f6"} Jan 31 07:03:34 crc kubenswrapper[4687]: I0131 07:03:34.424908 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-dk9wm" event={"ID":"4a0d6f56-70da-4e04-a580-efa274d918c1","Type":"ContainerStarted","Data":"200bff6cbe10aa37d308ec89febc6335d6d4dbbf8d5a852375639b0f7098fe60"} Jan 31 07:03:34 crc kubenswrapper[4687]: I0131 07:03:34.426608 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6","Type":"ContainerStarted","Data":"c043d3184ab54a35d1e0f449d503797f83fe59efcc6761224fdebfe2d46a168b"} Jan 31 07:03:34 crc kubenswrapper[4687]: I0131 07:03:34.444922 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-0" podStartSLOduration=9.111322497 podStartE2EDuration="17.444899556s" podCreationTimestamp="2026-01-31 07:03:17 +0000 UTC" firstStartedPulling="2026-01-31 07:03:19.795910246 +0000 UTC m=+1226.073169821" lastFinishedPulling="2026-01-31 07:03:28.129487295 +0000 UTC m=+1234.406746880" observedRunningTime="2026-01-31 07:03:34.439479177 +0000 UTC m=+1240.716738752" watchObservedRunningTime="2026-01-31 07:03:34.444899556 +0000 UTC m=+1240.722159141" Jan 31 07:03:34 crc kubenswrapper[4687]: I0131 07:03:34.462476 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-1" podStartSLOduration=9.081363548 podStartE2EDuration="17.462457116s" podCreationTimestamp="2026-01-31 07:03:17 +0000 UTC" firstStartedPulling="2026-01-31 07:03:19.804619116 +0000 UTC m=+1226.081878691" lastFinishedPulling="2026-01-31 07:03:28.185712684 
+0000 UTC m=+1234.462972259" observedRunningTime="2026-01-31 07:03:34.458494888 +0000 UTC m=+1240.735754473" watchObservedRunningTime="2026-01-31 07:03:34.462457116 +0000 UTC m=+1240.739716691" Jan 31 07:03:34 crc kubenswrapper[4687]: I0131 07:03:34.491943 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-2" podStartSLOduration=9.153192604000001 podStartE2EDuration="17.491924083s" podCreationTimestamp="2026-01-31 07:03:17 +0000 UTC" firstStartedPulling="2026-01-31 07:03:19.797626114 +0000 UTC m=+1226.074885689" lastFinishedPulling="2026-01-31 07:03:28.136357593 +0000 UTC m=+1234.413617168" observedRunningTime="2026-01-31 07:03:34.490903365 +0000 UTC m=+1240.768162940" watchObservedRunningTime="2026-01-31 07:03:34.491924083 +0000 UTC m=+1240.769183658" Jan 31 07:03:37 crc kubenswrapper[4687]: I0131 07:03:37.001494 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-dk9wm"] Jan 31 07:03:37 crc kubenswrapper[4687]: I0131 07:03:37.611930 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-vm9f7"] Jan 31 07:03:37 crc kubenswrapper[4687]: I0131 07:03:37.613057 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" Jan 31 07:03:37 crc kubenswrapper[4687]: I0131 07:03:37.624592 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-vm9f7"] Jan 31 07:03:37 crc kubenswrapper[4687]: I0131 07:03:37.730716 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjp5l\" (UniqueName: \"kubernetes.io/projected/a46e651b-24d0-42a5-8b48-06a4d92da4ba-kube-api-access-xjp5l\") pod \"rabbitmq-cluster-operator-index-vm9f7\" (UID: \"a46e651b-24d0-42a5-8b48-06a4d92da4ba\") " pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" Jan 31 07:03:37 crc kubenswrapper[4687]: I0131 07:03:37.831896 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjp5l\" (UniqueName: \"kubernetes.io/projected/a46e651b-24d0-42a5-8b48-06a4d92da4ba-kube-api-access-xjp5l\") pod \"rabbitmq-cluster-operator-index-vm9f7\" (UID: \"a46e651b-24d0-42a5-8b48-06a4d92da4ba\") " pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" Jan 31 07:03:37 crc kubenswrapper[4687]: I0131 07:03:37.861475 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjp5l\" (UniqueName: \"kubernetes.io/projected/a46e651b-24d0-42a5-8b48-06a4d92da4ba-kube-api-access-xjp5l\") pod \"rabbitmq-cluster-operator-index-vm9f7\" (UID: \"a46e651b-24d0-42a5-8b48-06a4d92da4ba\") " pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" Jan 31 07:03:37 crc kubenswrapper[4687]: I0131 07:03:37.935877 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" Jan 31 07:03:38 crc kubenswrapper[4687]: I0131 07:03:38.783582 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:38 crc kubenswrapper[4687]: I0131 07:03:38.784520 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:38 crc kubenswrapper[4687]: I0131 07:03:38.792396 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:38 crc kubenswrapper[4687]: I0131 07:03:38.792742 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:03:38 crc kubenswrapper[4687]: I0131 07:03:38.799630 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:38 crc kubenswrapper[4687]: I0131 07:03:38.799839 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:39 crc kubenswrapper[4687]: I0131 07:03:39.298142 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-vm9f7"] Jan 31 07:03:40 crc kubenswrapper[4687]: I0131 07:03:40.466900 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" event={"ID":"a46e651b-24d0-42a5-8b48-06a4d92da4ba","Type":"ContainerStarted","Data":"8af2ad8b871f4bc94135054c239c090b59a6905ef5ed49395da95de323cca6da"} Jan 31 07:03:40 crc kubenswrapper[4687]: I0131 07:03:40.577671 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/memcached-0" Jan 31 07:03:40 crc kubenswrapper[4687]: E0131 07:03:40.748867 4687 upgradeaware.go:441] Error proxying data from backend to client: writeto 
tcp 38.102.83.23:60344->38.102.83.23:45455: write tcp 192.168.126.11:10250->192.168.126.11:57460: write: broken pipe Jan 31 07:03:40 crc kubenswrapper[4687]: E0131 07:03:40.751938 4687 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.23:60344->38.102.83.23:45455: write tcp 38.102.83.23:60344->38.102.83.23:45455: write: broken pipe Jan 31 07:03:41 crc kubenswrapper[4687]: I0131 07:03:41.474884 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-dk9wm" event={"ID":"4a0d6f56-70da-4e04-a580-efa274d918c1","Type":"ContainerStarted","Data":"cfdf95515e928ac5d0a57c00f7149830aec8eb7c3084b3a711968c3008b198d9"} Jan 31 07:03:41 crc kubenswrapper[4687]: I0131 07:03:41.474976 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-dk9wm" podUID="4a0d6f56-70da-4e04-a580-efa274d918c1" containerName="registry-server" containerID="cri-o://cfdf95515e928ac5d0a57c00f7149830aec8eb7c3084b3a711968c3008b198d9" gracePeriod=2 Jan 31 07:03:41 crc kubenswrapper[4687]: I0131 07:03:41.478434 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" event={"ID":"a46e651b-24d0-42a5-8b48-06a4d92da4ba","Type":"ContainerStarted","Data":"a4483730da4be1a3e88a3bbcfedc40262c125684356d31e04b335d704ff66a23"} Jan 31 07:03:41 crc kubenswrapper[4687]: I0131 07:03:41.499128 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-dk9wm" podStartSLOduration=2.5668570429999997 podStartE2EDuration="9.499107954s" podCreationTimestamp="2026-01-31 07:03:32 +0000 UTC" firstStartedPulling="2026-01-31 07:03:33.734980766 +0000 UTC m=+1240.012240341" lastFinishedPulling="2026-01-31 07:03:40.667231677 +0000 UTC m=+1246.944491252" observedRunningTime="2026-01-31 07:03:41.495535507 +0000 UTC m=+1247.772795092" 
watchObservedRunningTime="2026-01-31 07:03:41.499107954 +0000 UTC m=+1247.776367529" Jan 31 07:03:41 crc kubenswrapper[4687]: I0131 07:03:41.512807 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" podStartSLOduration=3.343097877 podStartE2EDuration="4.512783129s" podCreationTimestamp="2026-01-31 07:03:37 +0000 UTC" firstStartedPulling="2026-01-31 07:03:39.497531875 +0000 UTC m=+1245.774791450" lastFinishedPulling="2026-01-31 07:03:40.667217127 +0000 UTC m=+1246.944476702" observedRunningTime="2026-01-31 07:03:41.510741443 +0000 UTC m=+1247.788001018" watchObservedRunningTime="2026-01-31 07:03:41.512783129 +0000 UTC m=+1247.790042704" Jan 31 07:03:41 crc kubenswrapper[4687]: I0131 07:03:41.987513 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-dk9wm" Jan 31 07:03:42 crc kubenswrapper[4687]: I0131 07:03:42.140742 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2f4j\" (UniqueName: \"kubernetes.io/projected/4a0d6f56-70da-4e04-a580-efa274d918c1-kube-api-access-c2f4j\") pod \"4a0d6f56-70da-4e04-a580-efa274d918c1\" (UID: \"4a0d6f56-70da-4e04-a580-efa274d918c1\") " Jan 31 07:03:42 crc kubenswrapper[4687]: I0131 07:03:42.146214 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a0d6f56-70da-4e04-a580-efa274d918c1-kube-api-access-c2f4j" (OuterVolumeSpecName: "kube-api-access-c2f4j") pod "4a0d6f56-70da-4e04-a580-efa274d918c1" (UID: "4a0d6f56-70da-4e04-a580-efa274d918c1"). InnerVolumeSpecName "kube-api-access-c2f4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:03:42 crc kubenswrapper[4687]: I0131 07:03:42.242260 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2f4j\" (UniqueName: \"kubernetes.io/projected/4a0d6f56-70da-4e04-a580-efa274d918c1-kube-api-access-c2f4j\") on node \"crc\" DevicePath \"\"" Jan 31 07:03:42 crc kubenswrapper[4687]: I0131 07:03:42.485025 4687 generic.go:334] "Generic (PLEG): container finished" podID="4a0d6f56-70da-4e04-a580-efa274d918c1" containerID="cfdf95515e928ac5d0a57c00f7149830aec8eb7c3084b3a711968c3008b198d9" exitCode=0 Jan 31 07:03:42 crc kubenswrapper[4687]: I0131 07:03:42.486167 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-dk9wm" Jan 31 07:03:42 crc kubenswrapper[4687]: I0131 07:03:42.490498 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-dk9wm" event={"ID":"4a0d6f56-70da-4e04-a580-efa274d918c1","Type":"ContainerDied","Data":"cfdf95515e928ac5d0a57c00f7149830aec8eb7c3084b3a711968c3008b198d9"} Jan 31 07:03:42 crc kubenswrapper[4687]: I0131 07:03:42.490558 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-dk9wm" event={"ID":"4a0d6f56-70da-4e04-a580-efa274d918c1","Type":"ContainerDied","Data":"200bff6cbe10aa37d308ec89febc6335d6d4dbbf8d5a852375639b0f7098fe60"} Jan 31 07:03:42 crc kubenswrapper[4687]: I0131 07:03:42.490580 4687 scope.go:117] "RemoveContainer" containerID="cfdf95515e928ac5d0a57c00f7149830aec8eb7c3084b3a711968c3008b198d9" Jan 31 07:03:42 crc kubenswrapper[4687]: I0131 07:03:42.506509 4687 scope.go:117] "RemoveContainer" containerID="cfdf95515e928ac5d0a57c00f7149830aec8eb7c3084b3a711968c3008b198d9" Jan 31 07:03:42 crc kubenswrapper[4687]: E0131 07:03:42.506965 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"cfdf95515e928ac5d0a57c00f7149830aec8eb7c3084b3a711968c3008b198d9\": container with ID starting with cfdf95515e928ac5d0a57c00f7149830aec8eb7c3084b3a711968c3008b198d9 not found: ID does not exist" containerID="cfdf95515e928ac5d0a57c00f7149830aec8eb7c3084b3a711968c3008b198d9" Jan 31 07:03:42 crc kubenswrapper[4687]: I0131 07:03:42.507000 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfdf95515e928ac5d0a57c00f7149830aec8eb7c3084b3a711968c3008b198d9"} err="failed to get container status \"cfdf95515e928ac5d0a57c00f7149830aec8eb7c3084b3a711968c3008b198d9\": rpc error: code = NotFound desc = could not find container \"cfdf95515e928ac5d0a57c00f7149830aec8eb7c3084b3a711968c3008b198d9\": container with ID starting with cfdf95515e928ac5d0a57c00f7149830aec8eb7c3084b3a711968c3008b198d9 not found: ID does not exist" Jan 31 07:03:42 crc kubenswrapper[4687]: I0131 07:03:42.512561 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-dk9wm"] Jan 31 07:03:42 crc kubenswrapper[4687]: I0131 07:03:42.516257 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-dk9wm"] Jan 31 07:03:43 crc kubenswrapper[4687]: I0131 07:03:43.158355 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:43 crc kubenswrapper[4687]: I0131 07:03:43.239434 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:03:43 crc kubenswrapper[4687]: I0131 07:03:43.610866 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a0d6f56-70da-4e04-a580-efa274d918c1" path="/var/lib/kubelet/pods/4a0d6f56-70da-4e04-a580-efa274d918c1/volumes" Jan 31 07:03:43 crc kubenswrapper[4687]: E0131 07:03:43.879150 4687 upgradeaware.go:441] Error proxying data from backend to client: 
writeto tcp 38.102.83.23:60430->38.102.83.23:45455: read tcp 38.102.83.23:60430->38.102.83.23:45455: read: connection reset by peer Jan 31 07:03:47 crc kubenswrapper[4687]: I0131 07:03:47.455585 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/root-account-create-update-gr7cz"] Jan 31 07:03:47 crc kubenswrapper[4687]: E0131 07:03:47.456312 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a0d6f56-70da-4e04-a580-efa274d918c1" containerName="registry-server" Jan 31 07:03:47 crc kubenswrapper[4687]: I0131 07:03:47.456324 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0d6f56-70da-4e04-a580-efa274d918c1" containerName="registry-server" Jan 31 07:03:47 crc kubenswrapper[4687]: I0131 07:03:47.456438 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a0d6f56-70da-4e04-a580-efa274d918c1" containerName="registry-server" Jan 31 07:03:47 crc kubenswrapper[4687]: I0131 07:03:47.456875 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/root-account-create-update-gr7cz" Jan 31 07:03:47 crc kubenswrapper[4687]: I0131 07:03:47.461352 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 31 07:03:47 crc kubenswrapper[4687]: I0131 07:03:47.523655 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/root-account-create-update-gr7cz"] Jan 31 07:03:47 crc kubenswrapper[4687]: I0131 07:03:47.611609 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw86q\" (UniqueName: \"kubernetes.io/projected/f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6-kube-api-access-sw86q\") pod \"root-account-create-update-gr7cz\" (UID: \"f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6\") " pod="glance-kuttl-tests/root-account-create-update-gr7cz" Jan 31 07:03:47 crc kubenswrapper[4687]: I0131 07:03:47.611652 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6-operator-scripts\") pod \"root-account-create-update-gr7cz\" (UID: \"f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6\") " pod="glance-kuttl-tests/root-account-create-update-gr7cz" Jan 31 07:03:47 crc kubenswrapper[4687]: I0131 07:03:47.713400 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw86q\" (UniqueName: \"kubernetes.io/projected/f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6-kube-api-access-sw86q\") pod \"root-account-create-update-gr7cz\" (UID: \"f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6\") " pod="glance-kuttl-tests/root-account-create-update-gr7cz" Jan 31 07:03:47 crc kubenswrapper[4687]: I0131 07:03:47.713462 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6-operator-scripts\") pod 
\"root-account-create-update-gr7cz\" (UID: \"f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6\") " pod="glance-kuttl-tests/root-account-create-update-gr7cz" Jan 31 07:03:47 crc kubenswrapper[4687]: I0131 07:03:47.714301 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6-operator-scripts\") pod \"root-account-create-update-gr7cz\" (UID: \"f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6\") " pod="glance-kuttl-tests/root-account-create-update-gr7cz" Jan 31 07:03:47 crc kubenswrapper[4687]: I0131 07:03:47.736593 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw86q\" (UniqueName: \"kubernetes.io/projected/f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6-kube-api-access-sw86q\") pod \"root-account-create-update-gr7cz\" (UID: \"f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6\") " pod="glance-kuttl-tests/root-account-create-update-gr7cz" Jan 31 07:03:47 crc kubenswrapper[4687]: I0131 07:03:47.780287 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/root-account-create-update-gr7cz" Jan 31 07:03:47 crc kubenswrapper[4687]: I0131 07:03:47.937029 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" Jan 31 07:03:47 crc kubenswrapper[4687]: I0131 07:03:47.937648 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" Jan 31 07:03:47 crc kubenswrapper[4687]: I0131 07:03:47.967801 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" Jan 31 07:03:48 crc kubenswrapper[4687]: I0131 07:03:48.549900 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" Jan 31 07:03:48 crc kubenswrapper[4687]: I0131 07:03:48.843666 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/openstack-galera-2" podUID="fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" containerName="galera" probeResult="failure" output=< Jan 31 07:03:48 crc kubenswrapper[4687]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Jan 31 07:03:48 crc kubenswrapper[4687]: > Jan 31 07:03:50 crc kubenswrapper[4687]: I0131 07:03:50.103050 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/root-account-create-update-gr7cz"] Jan 31 07:03:50 crc kubenswrapper[4687]: W0131 07:03:50.108913 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf30c8a06_e4ce_4647_aec5_e2cdbd4c04c6.slice/crio-e948319be1d744ea574dd980f00bf2552699a2de3982c55c6ea03d80a6533540 WatchSource:0}: Error finding container e948319be1d744ea574dd980f00bf2552699a2de3982c55c6ea03d80a6533540: Status 404 returned error can't find the container with id 
e948319be1d744ea574dd980f00bf2552699a2de3982c55c6ea03d80a6533540 Jan 31 07:03:50 crc kubenswrapper[4687]: I0131 07:03:50.539756 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/root-account-create-update-gr7cz" event={"ID":"f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6","Type":"ContainerStarted","Data":"e948319be1d744ea574dd980f00bf2552699a2de3982c55c6ea03d80a6533540"} Jan 31 07:03:51 crc kubenswrapper[4687]: I0131 07:03:51.665556 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk"] Jan 31 07:03:51 crc kubenswrapper[4687]: I0131 07:03:51.667375 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" Jan 31 07:03:51 crc kubenswrapper[4687]: I0131 07:03:51.669478 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9sffv" Jan 31 07:03:51 crc kubenswrapper[4687]: I0131 07:03:51.681864 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk"] Jan 31 07:03:51 crc kubenswrapper[4687]: I0131 07:03:51.762800 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk\" (UID: \"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" Jan 31 07:03:51 crc kubenswrapper[4687]: I0131 07:03:51.762861 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbpfh\" (UniqueName: \"kubernetes.io/projected/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-kube-api-access-sbpfh\") pod 
\"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk\" (UID: \"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" Jan 31 07:03:51 crc kubenswrapper[4687]: I0131 07:03:51.762922 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk\" (UID: \"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" Jan 31 07:03:51 crc kubenswrapper[4687]: I0131 07:03:51.864592 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbpfh\" (UniqueName: \"kubernetes.io/projected/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-kube-api-access-sbpfh\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk\" (UID: \"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" Jan 31 07:03:51 crc kubenswrapper[4687]: I0131 07:03:51.864690 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk\" (UID: \"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" Jan 31 07:03:51 crc kubenswrapper[4687]: I0131 07:03:51.864775 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk\" (UID: \"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7\") " 
pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" Jan 31 07:03:51 crc kubenswrapper[4687]: I0131 07:03:51.865343 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk\" (UID: \"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" Jan 31 07:03:51 crc kubenswrapper[4687]: I0131 07:03:51.865398 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk\" (UID: \"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" Jan 31 07:03:51 crc kubenswrapper[4687]: I0131 07:03:51.880734 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbpfh\" (UniqueName: \"kubernetes.io/projected/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-kube-api-access-sbpfh\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk\" (UID: \"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" Jan 31 07:03:51 crc kubenswrapper[4687]: I0131 07:03:51.982089 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" Jan 31 07:03:52 crc kubenswrapper[4687]: W0131 07:03:52.347167 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf47f9f6_c1ba_43ec_be66_a9aa4ca4afc7.slice/crio-144eccf6a6a9e11c5d78da339f8c242930b352eee6cb077bae0e37508f1ca02d WatchSource:0}: Error finding container 144eccf6a6a9e11c5d78da339f8c242930b352eee6cb077bae0e37508f1ca02d: Status 404 returned error can't find the container with id 144eccf6a6a9e11c5d78da339f8c242930b352eee6cb077bae0e37508f1ca02d Jan 31 07:03:52 crc kubenswrapper[4687]: I0131 07:03:52.357207 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk"] Jan 31 07:03:52 crc kubenswrapper[4687]: I0131 07:03:52.551677 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" event={"ID":"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7","Type":"ContainerStarted","Data":"144eccf6a6a9e11c5d78da339f8c242930b352eee6cb077bae0e37508f1ca02d"} Jan 31 07:03:53 crc kubenswrapper[4687]: E0131 07:03:53.202419 4687 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.23:51534->38.102.83.23:45455: write tcp 38.102.83.23:51534->38.102.83.23:45455: write: broken pipe Jan 31 07:03:53 crc kubenswrapper[4687]: I0131 07:03:53.560397 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/root-account-create-update-gr7cz" event={"ID":"f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6","Type":"ContainerStarted","Data":"0e78e6ac18d5619d5c826f399b2ce819b7345ab20fe6f9a27a73c7ce49ea50b0"} Jan 31 07:03:53 crc kubenswrapper[4687]: I0131 07:03:53.563313 4687 generic.go:334] "Generic (PLEG): container finished" podID="cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7" 
containerID="a3fdf27497e89e3ded758842f01e975fbc68d28dac4c38c41f91d62c5d4bab96" exitCode=0 Jan 31 07:03:53 crc kubenswrapper[4687]: I0131 07:03:53.563368 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" event={"ID":"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7","Type":"ContainerDied","Data":"a3fdf27497e89e3ded758842f01e975fbc68d28dac4c38c41f91d62c5d4bab96"} Jan 31 07:03:53 crc kubenswrapper[4687]: I0131 07:03:53.578234 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/root-account-create-update-gr7cz" podStartSLOduration=6.578215372 podStartE2EDuration="6.578215372s" podCreationTimestamp="2026-01-31 07:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:03:53.576026903 +0000 UTC m=+1259.853286468" watchObservedRunningTime="2026-01-31 07:03:53.578215372 +0000 UTC m=+1259.855474937" Jan 31 07:03:55 crc kubenswrapper[4687]: I0131 07:03:55.588343 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" event={"ID":"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7","Type":"ContainerStarted","Data":"d625eec42cf19dd45cd0c28cc2ca9d21b9dbcd13f3cd3629aeb6dd37a654d22d"} Jan 31 07:03:56 crc kubenswrapper[4687]: I0131 07:03:56.595388 4687 generic.go:334] "Generic (PLEG): container finished" podID="cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7" containerID="d625eec42cf19dd45cd0c28cc2ca9d21b9dbcd13f3cd3629aeb6dd37a654d22d" exitCode=0 Jan 31 07:03:56 crc kubenswrapper[4687]: I0131 07:03:56.595595 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" 
event={"ID":"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7","Type":"ContainerDied","Data":"d625eec42cf19dd45cd0c28cc2ca9d21b9dbcd13f3cd3629aeb6dd37a654d22d"} Jan 31 07:03:57 crc kubenswrapper[4687]: I0131 07:03:57.603629 4687 generic.go:334] "Generic (PLEG): container finished" podID="cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7" containerID="9e6de3dca85b2c8d2b5004d500c0909275a4a8ed86e5d6c234667a84700b4556" exitCode=0 Jan 31 07:03:57 crc kubenswrapper[4687]: I0131 07:03:57.610787 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" event={"ID":"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7","Type":"ContainerDied","Data":"9e6de3dca85b2c8d2b5004d500c0909275a4a8ed86e5d6c234667a84700b4556"} Jan 31 07:03:58 crc kubenswrapper[4687]: I0131 07:03:58.919492 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/openstack-galera-2" podUID="fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" containerName="galera" probeResult="failure" output=< Jan 31 07:03:58 crc kubenswrapper[4687]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Jan 31 07:03:58 crc kubenswrapper[4687]: > Jan 31 07:03:59 crc kubenswrapper[4687]: I0131 07:03:59.136173 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" Jan 31 07:03:59 crc kubenswrapper[4687]: I0131 07:03:59.296855 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-bundle\") pod \"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7\" (UID: \"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7\") " Jan 31 07:03:59 crc kubenswrapper[4687]: I0131 07:03:59.297039 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbpfh\" (UniqueName: \"kubernetes.io/projected/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-kube-api-access-sbpfh\") pod \"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7\" (UID: \"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7\") " Jan 31 07:03:59 crc kubenswrapper[4687]: I0131 07:03:59.297069 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-util\") pod \"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7\" (UID: \"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7\") " Jan 31 07:03:59 crc kubenswrapper[4687]: I0131 07:03:59.297902 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-bundle" (OuterVolumeSpecName: "bundle") pod "cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7" (UID: "cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:03:59 crc kubenswrapper[4687]: I0131 07:03:59.304155 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-kube-api-access-sbpfh" (OuterVolumeSpecName: "kube-api-access-sbpfh") pod "cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7" (UID: "cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7"). InnerVolumeSpecName "kube-api-access-sbpfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:03:59 crc kubenswrapper[4687]: I0131 07:03:59.312783 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-util" (OuterVolumeSpecName: "util") pod "cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7" (UID: "cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:03:59 crc kubenswrapper[4687]: I0131 07:03:59.398627 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbpfh\" (UniqueName: \"kubernetes.io/projected/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-kube-api-access-sbpfh\") on node \"crc\" DevicePath \"\"" Jan 31 07:03:59 crc kubenswrapper[4687]: I0131 07:03:59.398670 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-util\") on node \"crc\" DevicePath \"\"" Jan 31 07:03:59 crc kubenswrapper[4687]: I0131 07:03:59.398685 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:03:59 crc kubenswrapper[4687]: I0131 07:03:59.571918 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:03:59 crc kubenswrapper[4687]: I0131 07:03:59.619352 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" event={"ID":"cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7","Type":"ContainerDied","Data":"144eccf6a6a9e11c5d78da339f8c242930b352eee6cb077bae0e37508f1ca02d"} Jan 31 07:03:59 crc kubenswrapper[4687]: I0131 07:03:59.619805 4687 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="144eccf6a6a9e11c5d78da339f8c242930b352eee6cb077bae0e37508f1ca02d" Jan 31 07:03:59 crc kubenswrapper[4687]: I0131 07:03:59.619425 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk" Jan 31 07:03:59 crc kubenswrapper[4687]: I0131 07:03:59.641049 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 07:04:01 crc kubenswrapper[4687]: I0131 07:04:01.637447 4687 generic.go:334] "Generic (PLEG): container finished" podID="f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6" containerID="0e78e6ac18d5619d5c826f399b2ce819b7345ab20fe6f9a27a73c7ce49ea50b0" exitCode=0 Jan 31 07:04:01 crc kubenswrapper[4687]: I0131 07:04:01.637550 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/root-account-create-update-gr7cz" event={"ID":"f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6","Type":"ContainerDied","Data":"0e78e6ac18d5619d5c826f399b2ce819b7345ab20fe6f9a27a73c7ce49ea50b0"} Jan 31 07:04:03 crc kubenswrapper[4687]: I0131 07:04:03.107988 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/root-account-create-update-gr7cz" Jan 31 07:04:03 crc kubenswrapper[4687]: I0131 07:04:03.248386 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6-operator-scripts\") pod \"f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6\" (UID: \"f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6\") " Jan 31 07:04:03 crc kubenswrapper[4687]: I0131 07:04:03.248510 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw86q\" (UniqueName: \"kubernetes.io/projected/f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6-kube-api-access-sw86q\") pod \"f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6\" (UID: \"f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6\") " Jan 31 07:04:03 crc kubenswrapper[4687]: I0131 07:04:03.249188 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6" (UID: "f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:04:03 crc kubenswrapper[4687]: I0131 07:04:03.253457 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6-kube-api-access-sw86q" (OuterVolumeSpecName: "kube-api-access-sw86q") pod "f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6" (UID: "f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6"). InnerVolumeSpecName "kube-api-access-sw86q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:04:03 crc kubenswrapper[4687]: I0131 07:04:03.349908 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:04:03 crc kubenswrapper[4687]: I0131 07:04:03.349946 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw86q\" (UniqueName: \"kubernetes.io/projected/f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6-kube-api-access-sw86q\") on node \"crc\" DevicePath \"\"" Jan 31 07:04:03 crc kubenswrapper[4687]: I0131 07:04:03.653978 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/root-account-create-update-gr7cz" event={"ID":"f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6","Type":"ContainerDied","Data":"e948319be1d744ea574dd980f00bf2552699a2de3982c55c6ea03d80a6533540"} Jan 31 07:04:03 crc kubenswrapper[4687]: I0131 07:04:03.654304 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e948319be1d744ea574dd980f00bf2552699a2de3982c55c6ea03d80a6533540" Jan 31 07:04:03 crc kubenswrapper[4687]: I0131 07:04:03.654149 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/root-account-create-update-gr7cz" Jan 31 07:04:05 crc kubenswrapper[4687]: I0131 07:04:05.723227 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:04:05 crc kubenswrapper[4687]: I0131 07:04:05.846324 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 07:04:06 crc kubenswrapper[4687]: I0131 07:04:06.982476 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh"] Jan 31 07:04:06 crc kubenswrapper[4687]: E0131 07:04:06.982781 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6" containerName="mariadb-account-create-update" Jan 31 07:04:06 crc kubenswrapper[4687]: I0131 07:04:06.982799 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6" containerName="mariadb-account-create-update" Jan 31 07:04:06 crc kubenswrapper[4687]: E0131 07:04:06.982816 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7" containerName="extract" Jan 31 07:04:06 crc kubenswrapper[4687]: I0131 07:04:06.982826 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7" containerName="extract" Jan 31 07:04:06 crc kubenswrapper[4687]: E0131 07:04:06.982840 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7" containerName="util" Jan 31 07:04:06 crc kubenswrapper[4687]: I0131 07:04:06.982847 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7" containerName="util" Jan 31 07:04:06 crc kubenswrapper[4687]: E0131 07:04:06.982865 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7" 
containerName="pull" Jan 31 07:04:06 crc kubenswrapper[4687]: I0131 07:04:06.982873 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7" containerName="pull" Jan 31 07:04:06 crc kubenswrapper[4687]: I0131 07:04:06.983003 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6" containerName="mariadb-account-create-update" Jan 31 07:04:06 crc kubenswrapper[4687]: I0131 07:04:06.983026 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7" containerName="extract" Jan 31 07:04:06 crc kubenswrapper[4687]: I0131 07:04:06.983562 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh" Jan 31 07:04:06 crc kubenswrapper[4687]: I0131 07:04:06.990367 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-8jmbg" Jan 31 07:04:06 crc kubenswrapper[4687]: I0131 07:04:06.995875 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh"] Jan 31 07:04:07 crc kubenswrapper[4687]: I0131 07:04:07.033735 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsn6j\" (UniqueName: \"kubernetes.io/projected/160706b4-005d-446d-a925-3849ab49f621-kube-api-access-qsn6j\") pod \"rabbitmq-cluster-operator-779fc9694b-n25lh\" (UID: \"160706b4-005d-446d-a925-3849ab49f621\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh" Jan 31 07:04:07 crc kubenswrapper[4687]: I0131 07:04:07.135027 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsn6j\" (UniqueName: \"kubernetes.io/projected/160706b4-005d-446d-a925-3849ab49f621-kube-api-access-qsn6j\") pod \"rabbitmq-cluster-operator-779fc9694b-n25lh\" (UID: 
\"160706b4-005d-446d-a925-3849ab49f621\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh" Jan 31 07:04:07 crc kubenswrapper[4687]: I0131 07:04:07.177390 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsn6j\" (UniqueName: \"kubernetes.io/projected/160706b4-005d-446d-a925-3849ab49f621-kube-api-access-qsn6j\") pod \"rabbitmq-cluster-operator-779fc9694b-n25lh\" (UID: \"160706b4-005d-446d-a925-3849ab49f621\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh" Jan 31 07:04:07 crc kubenswrapper[4687]: I0131 07:04:07.310891 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh" Jan 31 07:04:07 crc kubenswrapper[4687]: I0131 07:04:07.766922 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh"] Jan 31 07:04:08 crc kubenswrapper[4687]: I0131 07:04:08.714890 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh" event={"ID":"160706b4-005d-446d-a925-3849ab49f621","Type":"ContainerStarted","Data":"35c73975af7c5becfe2593054c247b3f532741ffa35c59349c0d238764b25ffa"} Jan 31 07:04:14 crc kubenswrapper[4687]: I0131 07:04:14.766893 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh" event={"ID":"160706b4-005d-446d-a925-3849ab49f621","Type":"ContainerStarted","Data":"64a6d598d1ed865dd2e3d259d1aaead2b412820513214f29912ad30bdf636cc3"} Jan 31 07:04:14 crc kubenswrapper[4687]: I0131 07:04:14.786241 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh" podStartSLOduration=2.841735392 podStartE2EDuration="8.786220391s" podCreationTimestamp="2026-01-31 07:04:06 +0000 UTC" firstStartedPulling="2026-01-31 07:04:07.775198144 
+0000 UTC m=+1274.052457719" lastFinishedPulling="2026-01-31 07:04:13.719683123 +0000 UTC m=+1279.996942718" observedRunningTime="2026-01-31 07:04:14.78180193 +0000 UTC m=+1281.059061505" watchObservedRunningTime="2026-01-31 07:04:14.786220391 +0000 UTC m=+1281.063479976" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.409444 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.413369 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.415675 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-default-user" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.418216 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"rabbitmq-server-conf" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.418490 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-server-dockercfg-7f45q" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.418723 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"rabbitmq-plugins-conf" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.421335 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-erlang-cookie" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.424173 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.590727 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33674fdf-dc91-46fd-a4d5-795ff7fd4211-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.591123 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.591250 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33674fdf-dc91-46fd-a4d5-795ff7fd4211-pod-info\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.591448 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.591540 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.591636 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33674fdf-dc91-46fd-a4d5-795ff7fd4211-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.591705 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.591787 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp6c5\" (UniqueName: \"kubernetes.io/projected/33674fdf-dc91-46fd-a4d5-795ff7fd4211-kube-api-access-rp6c5\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.692806 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp6c5\" (UniqueName: \"kubernetes.io/projected/33674fdf-dc91-46fd-a4d5-795ff7fd4211-kube-api-access-rp6c5\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.692875 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33674fdf-dc91-46fd-a4d5-795ff7fd4211-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.692942 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.692966 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33674fdf-dc91-46fd-a4d5-795ff7fd4211-pod-info\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.693000 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.693030 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.693071 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33674fdf-dc91-46fd-a4d5-795ff7fd4211-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.693103 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " 
pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.693546 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.693730 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.694353 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33674fdf-dc91-46fd-a4d5-795ff7fd4211-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.696072 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.696185 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fffea4a3e896eb10ddb125df9132cad7fc4f363846b4d76c81215c485878088d/globalmount\"" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.699114 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33674fdf-dc91-46fd-a4d5-795ff7fd4211-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.699623 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33674fdf-dc91-46fd-a4d5-795ff7fd4211-pod-info\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.700525 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.717742 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db\") pod 
\"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.730528 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp6c5\" (UniqueName: \"kubernetes.io/projected/33674fdf-dc91-46fd-a4d5-795ff7fd4211-kube-api-access-rp6c5\") pod \"rabbitmq-server-0\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:17 crc kubenswrapper[4687]: I0131 07:04:17.846144 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:04:18 crc kubenswrapper[4687]: I0131 07:04:18.599578 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 07:04:18 crc kubenswrapper[4687]: W0131 07:04:18.615399 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33674fdf_dc91_46fd_a4d5_795ff7fd4211.slice/crio-b1d4679aa9cc40b8243af78fe84abc6bdb057cea1fe0720a3b62e6f4b727d447 WatchSource:0}: Error finding container b1d4679aa9cc40b8243af78fe84abc6bdb057cea1fe0720a3b62e6f4b727d447: Status 404 returned error can't find the container with id b1d4679aa9cc40b8243af78fe84abc6bdb057cea1fe0720a3b62e6f4b727d447 Jan 31 07:04:19 crc kubenswrapper[4687]: I0131 07:04:19.010760 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-l54rp"] Jan 31 07:04:19 crc kubenswrapper[4687]: I0131 07:04:19.011691 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-l54rp" Jan 31 07:04:19 crc kubenswrapper[4687]: I0131 07:04:19.013810 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-7dvgm" Jan 31 07:04:19 crc kubenswrapper[4687]: I0131 07:04:19.021618 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-l54rp"] Jan 31 07:04:19 crc kubenswrapper[4687]: I0131 07:04:19.178882 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd42f\" (UniqueName: \"kubernetes.io/projected/18442ead-5a1c-4a1c-bb4d-fddf9434b284-kube-api-access-dd42f\") pod \"keystone-operator-index-l54rp\" (UID: \"18442ead-5a1c-4a1c-bb4d-fddf9434b284\") " pod="openstack-operators/keystone-operator-index-l54rp" Jan 31 07:04:19 crc kubenswrapper[4687]: I0131 07:04:19.180632 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"33674fdf-dc91-46fd-a4d5-795ff7fd4211","Type":"ContainerStarted","Data":"b1d4679aa9cc40b8243af78fe84abc6bdb057cea1fe0720a3b62e6f4b727d447"} Jan 31 07:04:19 crc kubenswrapper[4687]: I0131 07:04:19.279768 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd42f\" (UniqueName: \"kubernetes.io/projected/18442ead-5a1c-4a1c-bb4d-fddf9434b284-kube-api-access-dd42f\") pod \"keystone-operator-index-l54rp\" (UID: \"18442ead-5a1c-4a1c-bb4d-fddf9434b284\") " pod="openstack-operators/keystone-operator-index-l54rp" Jan 31 07:04:19 crc kubenswrapper[4687]: I0131 07:04:19.307801 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd42f\" (UniqueName: \"kubernetes.io/projected/18442ead-5a1c-4a1c-bb4d-fddf9434b284-kube-api-access-dd42f\") pod \"keystone-operator-index-l54rp\" (UID: \"18442ead-5a1c-4a1c-bb4d-fddf9434b284\") " 
pod="openstack-operators/keystone-operator-index-l54rp" Jan 31 07:04:19 crc kubenswrapper[4687]: I0131 07:04:19.331518 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-l54rp" Jan 31 07:04:19 crc kubenswrapper[4687]: I0131 07:04:19.721740 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-l54rp"] Jan 31 07:04:20 crc kubenswrapper[4687]: I0131 07:04:20.189822 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-l54rp" event={"ID":"18442ead-5a1c-4a1c-bb4d-fddf9434b284","Type":"ContainerStarted","Data":"3d517b5368af819299c6ca7c3bbbc68690869d0eb5807688740c8f3d5794c18f"} Jan 31 07:04:28 crc kubenswrapper[4687]: I0131 07:04:28.684700 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:04:28 crc kubenswrapper[4687]: I0131 07:04:28.685920 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:04:31 crc kubenswrapper[4687]: I0131 07:04:31.272712 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"33674fdf-dc91-46fd-a4d5-795ff7fd4211","Type":"ContainerStarted","Data":"703c0d772a929eebfafa746449afc703a9975ddbf680361a13ce0ddeaea5d41f"} Jan 31 07:04:31 crc kubenswrapper[4687]: I0131 07:04:31.274834 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-l54rp" 
event={"ID":"18442ead-5a1c-4a1c-bb4d-fddf9434b284","Type":"ContainerStarted","Data":"419551a9e51d5ca62cca8d69760a7dd6433ed8518d462cad59f3dd7424c49259"} Jan 31 07:04:31 crc kubenswrapper[4687]: I0131 07:04:31.309266 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-l54rp" podStartSLOduration=3.367350073 podStartE2EDuration="13.309249177s" podCreationTimestamp="2026-01-31 07:04:18 +0000 UTC" firstStartedPulling="2026-01-31 07:04:19.710397066 +0000 UTC m=+1285.987656631" lastFinishedPulling="2026-01-31 07:04:29.65229616 +0000 UTC m=+1295.929555735" observedRunningTime="2026-01-31 07:04:31.30750489 +0000 UTC m=+1297.584764465" watchObservedRunningTime="2026-01-31 07:04:31.309249177 +0000 UTC m=+1297.586508752" Jan 31 07:04:39 crc kubenswrapper[4687]: I0131 07:04:39.332372 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-l54rp" Jan 31 07:04:39 crc kubenswrapper[4687]: I0131 07:04:39.332965 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-l54rp" Jan 31 07:04:39 crc kubenswrapper[4687]: I0131 07:04:39.354010 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-l54rp" Jan 31 07:04:40 crc kubenswrapper[4687]: I0131 07:04:40.365399 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-l54rp" Jan 31 07:04:49 crc kubenswrapper[4687]: I0131 07:04:49.251721 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v"] Jan 31 07:04:49 crc kubenswrapper[4687]: I0131 07:04:49.253576 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" Jan 31 07:04:49 crc kubenswrapper[4687]: I0131 07:04:49.255237 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9sffv" Jan 31 07:04:49 crc kubenswrapper[4687]: I0131 07:04:49.262230 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v"] Jan 31 07:04:49 crc kubenswrapper[4687]: I0131 07:04:49.433376 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e5d0709-195f-4511-897c-0dd7d15b5275-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v\" (UID: \"4e5d0709-195f-4511-897c-0dd7d15b5275\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" Jan 31 07:04:49 crc kubenswrapper[4687]: I0131 07:04:49.433447 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rq8r\" (UniqueName: \"kubernetes.io/projected/4e5d0709-195f-4511-897c-0dd7d15b5275-kube-api-access-6rq8r\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v\" (UID: \"4e5d0709-195f-4511-897c-0dd7d15b5275\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" Jan 31 07:04:49 crc kubenswrapper[4687]: I0131 07:04:49.433501 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e5d0709-195f-4511-897c-0dd7d15b5275-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v\" (UID: \"4e5d0709-195f-4511-897c-0dd7d15b5275\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" Jan 31 07:04:49 crc kubenswrapper[4687]: I0131 
07:04:49.534882 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rq8r\" (UniqueName: \"kubernetes.io/projected/4e5d0709-195f-4511-897c-0dd7d15b5275-kube-api-access-6rq8r\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v\" (UID: \"4e5d0709-195f-4511-897c-0dd7d15b5275\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" Jan 31 07:04:49 crc kubenswrapper[4687]: I0131 07:04:49.535031 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e5d0709-195f-4511-897c-0dd7d15b5275-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v\" (UID: \"4e5d0709-195f-4511-897c-0dd7d15b5275\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" Jan 31 07:04:49 crc kubenswrapper[4687]: I0131 07:04:49.535155 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e5d0709-195f-4511-897c-0dd7d15b5275-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v\" (UID: \"4e5d0709-195f-4511-897c-0dd7d15b5275\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" Jan 31 07:04:49 crc kubenswrapper[4687]: I0131 07:04:49.535879 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e5d0709-195f-4511-897c-0dd7d15b5275-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v\" (UID: \"4e5d0709-195f-4511-897c-0dd7d15b5275\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" Jan 31 07:04:49 crc kubenswrapper[4687]: I0131 07:04:49.535913 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4e5d0709-195f-4511-897c-0dd7d15b5275-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v\" (UID: \"4e5d0709-195f-4511-897c-0dd7d15b5275\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" Jan 31 07:04:49 crc kubenswrapper[4687]: I0131 07:04:49.552459 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rq8r\" (UniqueName: \"kubernetes.io/projected/4e5d0709-195f-4511-897c-0dd7d15b5275-kube-api-access-6rq8r\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v\" (UID: \"4e5d0709-195f-4511-897c-0dd7d15b5275\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" Jan 31 07:04:49 crc kubenswrapper[4687]: I0131 07:04:49.574263 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" Jan 31 07:04:49 crc kubenswrapper[4687]: I0131 07:04:49.789400 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v"] Jan 31 07:04:50 crc kubenswrapper[4687]: I0131 07:04:50.433817 4687 generic.go:334] "Generic (PLEG): container finished" podID="4e5d0709-195f-4511-897c-0dd7d15b5275" containerID="e31853d1171bb667a8c8d62c8006125f8b1f7f1227d797a963b974c8980cc85c" exitCode=0 Jan 31 07:04:50 crc kubenswrapper[4687]: I0131 07:04:50.434120 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" event={"ID":"4e5d0709-195f-4511-897c-0dd7d15b5275","Type":"ContainerDied","Data":"e31853d1171bb667a8c8d62c8006125f8b1f7f1227d797a963b974c8980cc85c"} Jan 31 07:04:50 crc kubenswrapper[4687]: I0131 07:04:50.434150 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" event={"ID":"4e5d0709-195f-4511-897c-0dd7d15b5275","Type":"ContainerStarted","Data":"22b5f882d64945b2caf70d04b4d4bcb910f1c7a69361ae0b21fdef37063db37c"} Jan 31 07:04:51 crc kubenswrapper[4687]: I0131 07:04:51.441345 4687 generic.go:334] "Generic (PLEG): container finished" podID="4e5d0709-195f-4511-897c-0dd7d15b5275" containerID="02048f9c545652d2634e3d612cc6ff23ac5893e7fbacce12160bba72ecc11c7b" exitCode=0 Jan 31 07:04:51 crc kubenswrapper[4687]: I0131 07:04:51.441441 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" event={"ID":"4e5d0709-195f-4511-897c-0dd7d15b5275","Type":"ContainerDied","Data":"02048f9c545652d2634e3d612cc6ff23ac5893e7fbacce12160bba72ecc11c7b"} Jan 31 07:04:52 crc kubenswrapper[4687]: I0131 07:04:52.449338 4687 generic.go:334] "Generic (PLEG): container finished" podID="4e5d0709-195f-4511-897c-0dd7d15b5275" containerID="04b0c2eff28c24a85d20addbeb930b8bc419b0a38c1c266149441dafdb5ecbfa" exitCode=0 Jan 31 07:04:52 crc kubenswrapper[4687]: I0131 07:04:52.449394 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" event={"ID":"4e5d0709-195f-4511-897c-0dd7d15b5275","Type":"ContainerDied","Data":"04b0c2eff28c24a85d20addbeb930b8bc419b0a38c1c266149441dafdb5ecbfa"} Jan 31 07:04:53 crc kubenswrapper[4687]: I0131 07:04:53.694856 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" Jan 31 07:04:53 crc kubenswrapper[4687]: I0131 07:04:53.800023 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rq8r\" (UniqueName: \"kubernetes.io/projected/4e5d0709-195f-4511-897c-0dd7d15b5275-kube-api-access-6rq8r\") pod \"4e5d0709-195f-4511-897c-0dd7d15b5275\" (UID: \"4e5d0709-195f-4511-897c-0dd7d15b5275\") " Jan 31 07:04:53 crc kubenswrapper[4687]: I0131 07:04:53.800580 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e5d0709-195f-4511-897c-0dd7d15b5275-util\") pod \"4e5d0709-195f-4511-897c-0dd7d15b5275\" (UID: \"4e5d0709-195f-4511-897c-0dd7d15b5275\") " Jan 31 07:04:53 crc kubenswrapper[4687]: I0131 07:04:53.800815 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e5d0709-195f-4511-897c-0dd7d15b5275-bundle\") pod \"4e5d0709-195f-4511-897c-0dd7d15b5275\" (UID: \"4e5d0709-195f-4511-897c-0dd7d15b5275\") " Jan 31 07:04:53 crc kubenswrapper[4687]: I0131 07:04:53.801885 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e5d0709-195f-4511-897c-0dd7d15b5275-bundle" (OuterVolumeSpecName: "bundle") pod "4e5d0709-195f-4511-897c-0dd7d15b5275" (UID: "4e5d0709-195f-4511-897c-0dd7d15b5275"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:04:53 crc kubenswrapper[4687]: I0131 07:04:53.806676 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e5d0709-195f-4511-897c-0dd7d15b5275-kube-api-access-6rq8r" (OuterVolumeSpecName: "kube-api-access-6rq8r") pod "4e5d0709-195f-4511-897c-0dd7d15b5275" (UID: "4e5d0709-195f-4511-897c-0dd7d15b5275"). InnerVolumeSpecName "kube-api-access-6rq8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:04:53 crc kubenswrapper[4687]: I0131 07:04:53.817177 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e5d0709-195f-4511-897c-0dd7d15b5275-util" (OuterVolumeSpecName: "util") pod "4e5d0709-195f-4511-897c-0dd7d15b5275" (UID: "4e5d0709-195f-4511-897c-0dd7d15b5275"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:04:53 crc kubenswrapper[4687]: I0131 07:04:53.903158 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e5d0709-195f-4511-897c-0dd7d15b5275-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:04:53 crc kubenswrapper[4687]: I0131 07:04:53.903194 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rq8r\" (UniqueName: \"kubernetes.io/projected/4e5d0709-195f-4511-897c-0dd7d15b5275-kube-api-access-6rq8r\") on node \"crc\" DevicePath \"\"" Jan 31 07:04:53 crc kubenswrapper[4687]: I0131 07:04:53.903206 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e5d0709-195f-4511-897c-0dd7d15b5275-util\") on node \"crc\" DevicePath \"\"" Jan 31 07:04:54 crc kubenswrapper[4687]: I0131 07:04:54.461985 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" event={"ID":"4e5d0709-195f-4511-897c-0dd7d15b5275","Type":"ContainerDied","Data":"22b5f882d64945b2caf70d04b4d4bcb910f1c7a69361ae0b21fdef37063db37c"} Jan 31 07:04:54 crc kubenswrapper[4687]: I0131 07:04:54.462375 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22b5f882d64945b2caf70d04b4d4bcb910f1c7a69361ae0b21fdef37063db37c" Jan 31 07:04:54 crc kubenswrapper[4687]: I0131 07:04:54.462026 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v" Jan 31 07:04:58 crc kubenswrapper[4687]: I0131 07:04:58.684059 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:04:58 crc kubenswrapper[4687]: I0131 07:04:58.685545 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:04:59 crc kubenswrapper[4687]: I0131 07:04:59.781245 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft"] Jan 31 07:04:59 crc kubenswrapper[4687]: E0131 07:04:59.781687 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5d0709-195f-4511-897c-0dd7d15b5275" containerName="util" Jan 31 07:04:59 crc kubenswrapper[4687]: I0131 07:04:59.781703 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5d0709-195f-4511-897c-0dd7d15b5275" containerName="util" Jan 31 07:04:59 crc kubenswrapper[4687]: E0131 07:04:59.781734 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5d0709-195f-4511-897c-0dd7d15b5275" containerName="extract" Jan 31 07:04:59 crc kubenswrapper[4687]: I0131 07:04:59.781741 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5d0709-195f-4511-897c-0dd7d15b5275" containerName="extract" Jan 31 07:04:59 crc kubenswrapper[4687]: E0131 07:04:59.781751 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5d0709-195f-4511-897c-0dd7d15b5275" 
containerName="pull" Jan 31 07:04:59 crc kubenswrapper[4687]: I0131 07:04:59.781759 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5d0709-195f-4511-897c-0dd7d15b5275" containerName="pull" Jan 31 07:04:59 crc kubenswrapper[4687]: I0131 07:04:59.781945 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e5d0709-195f-4511-897c-0dd7d15b5275" containerName="extract" Jan 31 07:04:59 crc kubenswrapper[4687]: I0131 07:04:59.782476 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" Jan 31 07:04:59 crc kubenswrapper[4687]: I0131 07:04:59.784791 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Jan 31 07:04:59 crc kubenswrapper[4687]: I0131 07:04:59.788311 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-grbdw" Jan 31 07:04:59 crc kubenswrapper[4687]: I0131 07:04:59.797274 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft"] Jan 31 07:04:59 crc kubenswrapper[4687]: I0131 07:04:59.982661 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv2hm\" (UniqueName: \"kubernetes.io/projected/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-kube-api-access-hv2hm\") pod \"keystone-operator-controller-manager-cf47c99bb-vb9ft\" (UID: \"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c\") " pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" Jan 31 07:04:59 crc kubenswrapper[4687]: I0131 07:04:59.982727 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-apiservice-cert\") pod 
\"keystone-operator-controller-manager-cf47c99bb-vb9ft\" (UID: \"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c\") " pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" Jan 31 07:04:59 crc kubenswrapper[4687]: I0131 07:04:59.982810 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-webhook-cert\") pod \"keystone-operator-controller-manager-cf47c99bb-vb9ft\" (UID: \"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c\") " pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" Jan 31 07:05:00 crc kubenswrapper[4687]: I0131 07:05:00.083890 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-webhook-cert\") pod \"keystone-operator-controller-manager-cf47c99bb-vb9ft\" (UID: \"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c\") " pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" Jan 31 07:05:00 crc kubenswrapper[4687]: I0131 07:05:00.083995 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv2hm\" (UniqueName: \"kubernetes.io/projected/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-kube-api-access-hv2hm\") pod \"keystone-operator-controller-manager-cf47c99bb-vb9ft\" (UID: \"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c\") " pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" Jan 31 07:05:00 crc kubenswrapper[4687]: I0131 07:05:00.084028 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-apiservice-cert\") pod \"keystone-operator-controller-manager-cf47c99bb-vb9ft\" (UID: \"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c\") " pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" Jan 31 
07:05:00 crc kubenswrapper[4687]: I0131 07:05:00.089577 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-apiservice-cert\") pod \"keystone-operator-controller-manager-cf47c99bb-vb9ft\" (UID: \"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c\") " pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" Jan 31 07:05:00 crc kubenswrapper[4687]: I0131 07:05:00.089577 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-webhook-cert\") pod \"keystone-operator-controller-manager-cf47c99bb-vb9ft\" (UID: \"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c\") " pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" Jan 31 07:05:00 crc kubenswrapper[4687]: I0131 07:05:00.106485 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv2hm\" (UniqueName: \"kubernetes.io/projected/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-kube-api-access-hv2hm\") pod \"keystone-operator-controller-manager-cf47c99bb-vb9ft\" (UID: \"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c\") " pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" Jan 31 07:05:00 crc kubenswrapper[4687]: I0131 07:05:00.401943 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" Jan 31 07:05:00 crc kubenswrapper[4687]: I0131 07:05:00.601654 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft"] Jan 31 07:05:00 crc kubenswrapper[4687]: I0131 07:05:00.611147 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 07:05:01 crc kubenswrapper[4687]: I0131 07:05:01.502991 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" event={"ID":"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c","Type":"ContainerStarted","Data":"14456a6935d0160108ffd66d5e60559fb57045bd9da663dabdbc39f5c8056c0d"} Jan 31 07:05:03 crc kubenswrapper[4687]: I0131 07:05:03.517867 4687 generic.go:334] "Generic (PLEG): container finished" podID="33674fdf-dc91-46fd-a4d5-795ff7fd4211" containerID="703c0d772a929eebfafa746449afc703a9975ddbf680361a13ce0ddeaea5d41f" exitCode=0 Jan 31 07:05:03 crc kubenswrapper[4687]: I0131 07:05:03.517971 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"33674fdf-dc91-46fd-a4d5-795ff7fd4211","Type":"ContainerDied","Data":"703c0d772a929eebfafa746449afc703a9975ddbf680361a13ce0ddeaea5d41f"} Jan 31 07:05:04 crc kubenswrapper[4687]: I0131 07:05:04.527039 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"33674fdf-dc91-46fd-a4d5-795ff7fd4211","Type":"ContainerStarted","Data":"1a9e11626e862f9e085c571a1f0dccd5f1c46c3ae1bbacf1035e66065b30d721"} Jan 31 07:05:04 crc kubenswrapper[4687]: I0131 07:05:04.527883 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:05:04 crc kubenswrapper[4687]: I0131 07:05:04.549816 4687 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="glance-kuttl-tests/rabbitmq-server-0" podStartSLOduration=36.829710634 podStartE2EDuration="48.549794453s" podCreationTimestamp="2026-01-31 07:04:16 +0000 UTC" firstStartedPulling="2026-01-31 07:04:18.61803453 +0000 UTC m=+1284.895294105" lastFinishedPulling="2026-01-31 07:04:30.338118339 +0000 UTC m=+1296.615377924" observedRunningTime="2026-01-31 07:05:04.548246451 +0000 UTC m=+1330.825506046" watchObservedRunningTime="2026-01-31 07:05:04.549794453 +0000 UTC m=+1330.827054028" Jan 31 07:05:07 crc kubenswrapper[4687]: I0131 07:05:07.551099 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" event={"ID":"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c","Type":"ContainerStarted","Data":"aca9e422316c52020d83135248796b0e744514f24e8dec495442872d96cfabc7"} Jan 31 07:05:07 crc kubenswrapper[4687]: I0131 07:05:07.551710 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" Jan 31 07:05:07 crc kubenswrapper[4687]: I0131 07:05:07.575112 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" podStartSLOduration=2.225574166 podStartE2EDuration="8.575094477s" podCreationTimestamp="2026-01-31 07:04:59 +0000 UTC" firstStartedPulling="2026-01-31 07:05:00.610690586 +0000 UTC m=+1326.887950151" lastFinishedPulling="2026-01-31 07:05:06.960210887 +0000 UTC m=+1333.237470462" observedRunningTime="2026-01-31 07:05:07.570237674 +0000 UTC m=+1333.847497249" watchObservedRunningTime="2026-01-31 07:05:07.575094477 +0000 UTC m=+1333.852354052" Jan 31 07:05:17 crc kubenswrapper[4687]: I0131 07:05:17.850300 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:05:20 crc kubenswrapper[4687]: I0131 07:05:20.407402 4687 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.621238 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-db-create-qnsvw"] Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.622546 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-qnsvw" Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.631593 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-1184-account-create-update-jk5qs"] Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.632532 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-1184-account-create-update-jk5qs" Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.636208 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-create-qnsvw"] Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.644196 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-db-secret" Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.661291 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-1184-account-create-update-jk5qs"] Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.798292 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9-operator-scripts\") pod \"keystone-1184-account-create-update-jk5qs\" (UID: \"2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9\") " pod="glance-kuttl-tests/keystone-1184-account-create-update-jk5qs" Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.798386 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d-operator-scripts\") pod \"keystone-db-create-qnsvw\" (UID: \"a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d\") " pod="glance-kuttl-tests/keystone-db-create-qnsvw" Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.798498 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntfqh\" (UniqueName: \"kubernetes.io/projected/2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9-kube-api-access-ntfqh\") pod \"keystone-1184-account-create-update-jk5qs\" (UID: \"2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9\") " pod="glance-kuttl-tests/keystone-1184-account-create-update-jk5qs" Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.798553 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc2xg\" (UniqueName: \"kubernetes.io/projected/a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d-kube-api-access-xc2xg\") pod \"keystone-db-create-qnsvw\" (UID: \"a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d\") " pod="glance-kuttl-tests/keystone-db-create-qnsvw" Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.900139 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9-operator-scripts\") pod \"keystone-1184-account-create-update-jk5qs\" (UID: \"2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9\") " pod="glance-kuttl-tests/keystone-1184-account-create-update-jk5qs" Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.900218 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d-operator-scripts\") pod \"keystone-db-create-qnsvw\" (UID: \"a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d\") " pod="glance-kuttl-tests/keystone-db-create-qnsvw" Jan 31 07:05:23 crc kubenswrapper[4687]: 
I0131 07:05:23.900244 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntfqh\" (UniqueName: \"kubernetes.io/projected/2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9-kube-api-access-ntfqh\") pod \"keystone-1184-account-create-update-jk5qs\" (UID: \"2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9\") " pod="glance-kuttl-tests/keystone-1184-account-create-update-jk5qs" Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.900266 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc2xg\" (UniqueName: \"kubernetes.io/projected/a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d-kube-api-access-xc2xg\") pod \"keystone-db-create-qnsvw\" (UID: \"a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d\") " pod="glance-kuttl-tests/keystone-db-create-qnsvw" Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.901101 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9-operator-scripts\") pod \"keystone-1184-account-create-update-jk5qs\" (UID: \"2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9\") " pod="glance-kuttl-tests/keystone-1184-account-create-update-jk5qs" Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.901120 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d-operator-scripts\") pod \"keystone-db-create-qnsvw\" (UID: \"a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d\") " pod="glance-kuttl-tests/keystone-db-create-qnsvw" Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.919294 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc2xg\" (UniqueName: \"kubernetes.io/projected/a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d-kube-api-access-xc2xg\") pod \"keystone-db-create-qnsvw\" (UID: \"a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d\") " 
pod="glance-kuttl-tests/keystone-db-create-qnsvw" Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.924139 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntfqh\" (UniqueName: \"kubernetes.io/projected/2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9-kube-api-access-ntfqh\") pod \"keystone-1184-account-create-update-jk5qs\" (UID: \"2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9\") " pod="glance-kuttl-tests/keystone-1184-account-create-update-jk5qs" Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.952575 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-qnsvw" Jan 31 07:05:23 crc kubenswrapper[4687]: I0131 07:05:23.965619 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-1184-account-create-update-jk5qs" Jan 31 07:05:24 crc kubenswrapper[4687]: I0131 07:05:24.433613 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-create-qnsvw"] Jan 31 07:05:24 crc kubenswrapper[4687]: W0131 07:05:24.436592 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0b43f28_b08f_4e18_b8bd_d5950c5a9b9d.slice/crio-1b383c457fcf71779bf1e32f3bd91ec9a41ec3cb17525568f4efd501a5b54937 WatchSource:0}: Error finding container 1b383c457fcf71779bf1e32f3bd91ec9a41ec3cb17525568f4efd501a5b54937: Status 404 returned error can't find the container with id 1b383c457fcf71779bf1e32f3bd91ec9a41ec3cb17525568f4efd501a5b54937 Jan 31 07:05:24 crc kubenswrapper[4687]: I0131 07:05:24.440169 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-1184-account-create-update-jk5qs"] Jan 31 07:05:24 crc kubenswrapper[4687]: W0131 07:05:24.448915 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2805bcaf_4eb4_4cd7_89fd_62d3a45abcf9.slice/crio-920a899147e9f9a059f5da055298aad9db1a3c879af610daf028a03decd25f63 WatchSource:0}: Error finding container 920a899147e9f9a059f5da055298aad9db1a3c879af610daf028a03decd25f63: Status 404 returned error can't find the container with id 920a899147e9f9a059f5da055298aad9db1a3c879af610daf028a03decd25f63 Jan 31 07:05:24 crc kubenswrapper[4687]: I0131 07:05:24.761342 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-1184-account-create-update-jk5qs" event={"ID":"2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9","Type":"ContainerStarted","Data":"425544d1e116c741e09a69d5d1ebfcf1c1299fa94ee06c8ccaeb707c8a7ea626"} Jan 31 07:05:24 crc kubenswrapper[4687]: I0131 07:05:24.761715 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-1184-account-create-update-jk5qs" event={"ID":"2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9","Type":"ContainerStarted","Data":"920a899147e9f9a059f5da055298aad9db1a3c879af610daf028a03decd25f63"} Jan 31 07:05:24 crc kubenswrapper[4687]: I0131 07:05:24.762643 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-qnsvw" event={"ID":"a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d","Type":"ContainerStarted","Data":"10ce8385a1c96b6fa17884b6b553750bff500d1b9ed3bde539703af5b29d9260"} Jan 31 07:05:24 crc kubenswrapper[4687]: I0131 07:05:24.762670 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-qnsvw" event={"ID":"a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d","Type":"ContainerStarted","Data":"1b383c457fcf71779bf1e32f3bd91ec9a41ec3cb17525568f4efd501a5b54937"} Jan 31 07:05:24 crc kubenswrapper[4687]: I0131 07:05:24.782906 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-1184-account-create-update-jk5qs" podStartSLOduration=1.7828887089999998 
podStartE2EDuration="1.782888709s" podCreationTimestamp="2026-01-31 07:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:05:24.778666754 +0000 UTC m=+1351.055926339" watchObservedRunningTime="2026-01-31 07:05:24.782888709 +0000 UTC m=+1351.060148284" Jan 31 07:05:24 crc kubenswrapper[4687]: I0131 07:05:24.801362 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-db-create-qnsvw" podStartSLOduration=1.801346654 podStartE2EDuration="1.801346654s" podCreationTimestamp="2026-01-31 07:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:05:24.799508574 +0000 UTC m=+1351.076768159" watchObservedRunningTime="2026-01-31 07:05:24.801346654 +0000 UTC m=+1351.078606229" Jan 31 07:05:25 crc kubenswrapper[4687]: I0131 07:05:25.771116 4687 generic.go:334] "Generic (PLEG): container finished" podID="2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9" containerID="425544d1e116c741e09a69d5d1ebfcf1c1299fa94ee06c8ccaeb707c8a7ea626" exitCode=0 Jan 31 07:05:25 crc kubenswrapper[4687]: I0131 07:05:25.771233 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-1184-account-create-update-jk5qs" event={"ID":"2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9","Type":"ContainerDied","Data":"425544d1e116c741e09a69d5d1ebfcf1c1299fa94ee06c8ccaeb707c8a7ea626"} Jan 31 07:05:25 crc kubenswrapper[4687]: I0131 07:05:25.772952 4687 generic.go:334] "Generic (PLEG): container finished" podID="a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d" containerID="10ce8385a1c96b6fa17884b6b553750bff500d1b9ed3bde539703af5b29d9260" exitCode=0 Jan 31 07:05:25 crc kubenswrapper[4687]: I0131 07:05:25.772987 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-qnsvw" 
event={"ID":"a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d","Type":"ContainerDied","Data":"10ce8385a1c96b6fa17884b6b553750bff500d1b9ed3bde539703af5b29d9260"} Jan 31 07:05:26 crc kubenswrapper[4687]: I0131 07:05:26.409064 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-index-jbhw9"] Jan 31 07:05:26 crc kubenswrapper[4687]: I0131 07:05:26.410444 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-jbhw9" Jan 31 07:05:26 crc kubenswrapper[4687]: I0131 07:05:26.412736 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-index-dockercfg-qzbmr" Jan 31 07:05:26 crc kubenswrapper[4687]: I0131 07:05:26.425115 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-jbhw9"] Jan 31 07:05:26 crc kubenswrapper[4687]: I0131 07:05:26.497075 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf5cd\" (UniqueName: \"kubernetes.io/projected/5e489106-6a31-463b-98e6-62460f2ca169-kube-api-access-gf5cd\") pod \"horizon-operator-index-jbhw9\" (UID: \"5e489106-6a31-463b-98e6-62460f2ca169\") " pod="openstack-operators/horizon-operator-index-jbhw9" Jan 31 07:05:26 crc kubenswrapper[4687]: I0131 07:05:26.597865 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf5cd\" (UniqueName: \"kubernetes.io/projected/5e489106-6a31-463b-98e6-62460f2ca169-kube-api-access-gf5cd\") pod \"horizon-operator-index-jbhw9\" (UID: \"5e489106-6a31-463b-98e6-62460f2ca169\") " pod="openstack-operators/horizon-operator-index-jbhw9" Jan 31 07:05:26 crc kubenswrapper[4687]: I0131 07:05:26.622317 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf5cd\" (UniqueName: \"kubernetes.io/projected/5e489106-6a31-463b-98e6-62460f2ca169-kube-api-access-gf5cd\") pod 
\"horizon-operator-index-jbhw9\" (UID: \"5e489106-6a31-463b-98e6-62460f2ca169\") " pod="openstack-operators/horizon-operator-index-jbhw9" Jan 31 07:05:26 crc kubenswrapper[4687]: I0131 07:05:26.730996 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-jbhw9" Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.449188 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-1184-account-create-update-jk5qs" Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.454203 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-qnsvw" Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.557715 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9-operator-scripts\") pod \"2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9\" (UID: \"2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9\") " Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.557852 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d-operator-scripts\") pod \"a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d\" (UID: \"a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d\") " Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.558507 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d" (UID: "a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.558531 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9" (UID: "2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.558635 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntfqh\" (UniqueName: \"kubernetes.io/projected/2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9-kube-api-access-ntfqh\") pod \"2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9\" (UID: \"2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9\") " Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.558778 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc2xg\" (UniqueName: \"kubernetes.io/projected/a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d-kube-api-access-xc2xg\") pod \"a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d\" (UID: \"a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d\") " Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.559493 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.559518 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.562617 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9-kube-api-access-ntfqh" (OuterVolumeSpecName: "kube-api-access-ntfqh") pod "2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9" (UID: "2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9"). InnerVolumeSpecName "kube-api-access-ntfqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.562743 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d-kube-api-access-xc2xg" (OuterVolumeSpecName: "kube-api-access-xc2xg") pod "a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d" (UID: "a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d"). InnerVolumeSpecName "kube-api-access-xc2xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.618473 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-jbhw9"] Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.661520 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntfqh\" (UniqueName: \"kubernetes.io/projected/2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9-kube-api-access-ntfqh\") on node \"crc\" DevicePath \"\"" Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.661560 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc2xg\" (UniqueName: \"kubernetes.io/projected/a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d-kube-api-access-xc2xg\") on node \"crc\" DevicePath \"\"" Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.785165 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-1184-account-create-update-jk5qs" Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.785180 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-1184-account-create-update-jk5qs" event={"ID":"2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9","Type":"ContainerDied","Data":"920a899147e9f9a059f5da055298aad9db1a3c879af610daf028a03decd25f63"} Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.785547 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="920a899147e9f9a059f5da055298aad9db1a3c879af610daf028a03decd25f63" Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.786458 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-qnsvw" event={"ID":"a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d","Type":"ContainerDied","Data":"1b383c457fcf71779bf1e32f3bd91ec9a41ec3cb17525568f4efd501a5b54937"} Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.786588 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b383c457fcf71779bf1e32f3bd91ec9a41ec3cb17525568f4efd501a5b54937" Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.786495 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-qnsvw" Jan 31 07:05:27 crc kubenswrapper[4687]: I0131 07:05:27.787995 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-jbhw9" event={"ID":"5e489106-6a31-463b-98e6-62460f2ca169","Type":"ContainerStarted","Data":"410399e3cd72e1b875b5a4f6fad19d0146f6f9f4af30b4d857b1edd71b62e3b1"} Jan 31 07:05:28 crc kubenswrapper[4687]: I0131 07:05:28.683993 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:05:28 crc kubenswrapper[4687]: I0131 07:05:28.684363 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:05:28 crc kubenswrapper[4687]: I0131 07:05:28.684474 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 07:05:28 crc kubenswrapper[4687]: I0131 07:05:28.685116 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4ad799ecadff0d9823e53b53153bf63acdd5cce54e7a1eb02184f7b2a6947f6"} pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:05:28 crc kubenswrapper[4687]: I0131 07:05:28.685168 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" 
podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" containerID="cri-o://f4ad799ecadff0d9823e53b53153bf63acdd5cce54e7a1eb02184f7b2a6947f6" gracePeriod=600 Jan 31 07:05:28 crc kubenswrapper[4687]: I0131 07:05:28.829780 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-jbhw9" event={"ID":"5e489106-6a31-463b-98e6-62460f2ca169","Type":"ContainerStarted","Data":"0ef3df8fb45b01c3dd6748f91fe343eecf56690f4c5de67888e377400907f49e"} Jan 31 07:05:28 crc kubenswrapper[4687]: I0131 07:05:28.852861 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-index-jbhw9" podStartSLOduration=2.075942511 podStartE2EDuration="2.85284066s" podCreationTimestamp="2026-01-31 07:05:26 +0000 UTC" firstStartedPulling="2026-01-31 07:05:27.635487901 +0000 UTC m=+1353.912747476" lastFinishedPulling="2026-01-31 07:05:28.41238605 +0000 UTC m=+1354.689645625" observedRunningTime="2026-01-31 07:05:28.852231574 +0000 UTC m=+1355.129491159" watchObservedRunningTime="2026-01-31 07:05:28.85284066 +0000 UTC m=+1355.130100245" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.292368 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-db-sync-ttw96"] Jan 31 07:05:29 crc kubenswrapper[4687]: E0131 07:05:29.293001 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9" containerName="mariadb-account-create-update" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.293021 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9" containerName="mariadb-account-create-update" Jan 31 07:05:29 crc kubenswrapper[4687]: E0131 07:05:29.293037 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d" containerName="mariadb-database-create" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 
07:05:29.293045 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d" containerName="mariadb-database-create" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.293192 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d" containerName="mariadb-database-create" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.293204 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9" containerName="mariadb-account-create-update" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.293741 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-ttw96" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.296055 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.296093 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.296501 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-9spm5" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.302076 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.308422 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-ttw96"] Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.390904 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q58j\" (UniqueName: \"kubernetes.io/projected/766b071f-fb29-43d1-be22-a261a8cb787c-kube-api-access-7q58j\") pod \"keystone-db-sync-ttw96\" (UID: \"766b071f-fb29-43d1-be22-a261a8cb787c\") " 
pod="glance-kuttl-tests/keystone-db-sync-ttw96" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.391237 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/766b071f-fb29-43d1-be22-a261a8cb787c-config-data\") pod \"keystone-db-sync-ttw96\" (UID: \"766b071f-fb29-43d1-be22-a261a8cb787c\") " pod="glance-kuttl-tests/keystone-db-sync-ttw96" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.492730 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q58j\" (UniqueName: \"kubernetes.io/projected/766b071f-fb29-43d1-be22-a261a8cb787c-kube-api-access-7q58j\") pod \"keystone-db-sync-ttw96\" (UID: \"766b071f-fb29-43d1-be22-a261a8cb787c\") " pod="glance-kuttl-tests/keystone-db-sync-ttw96" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.493074 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/766b071f-fb29-43d1-be22-a261a8cb787c-config-data\") pod \"keystone-db-sync-ttw96\" (UID: \"766b071f-fb29-43d1-be22-a261a8cb787c\") " pod="glance-kuttl-tests/keystone-db-sync-ttw96" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.499076 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/766b071f-fb29-43d1-be22-a261a8cb787c-config-data\") pod \"keystone-db-sync-ttw96\" (UID: \"766b071f-fb29-43d1-be22-a261a8cb787c\") " pod="glance-kuttl-tests/keystone-db-sync-ttw96" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.513090 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q58j\" (UniqueName: \"kubernetes.io/projected/766b071f-fb29-43d1-be22-a261a8cb787c-kube-api-access-7q58j\") pod \"keystone-db-sync-ttw96\" (UID: \"766b071f-fb29-43d1-be22-a261a8cb787c\") " pod="glance-kuttl-tests/keystone-db-sync-ttw96" Jan 31 07:05:29 
crc kubenswrapper[4687]: I0131 07:05:29.615878 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-ttw96" Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.839307 4687 generic.go:334] "Generic (PLEG): container finished" podID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerID="f4ad799ecadff0d9823e53b53153bf63acdd5cce54e7a1eb02184f7b2a6947f6" exitCode=0 Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.839369 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerDied","Data":"f4ad799ecadff0d9823e53b53153bf63acdd5cce54e7a1eb02184f7b2a6947f6"} Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.839791 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerStarted","Data":"6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c"} Jan 31 07:05:29 crc kubenswrapper[4687]: I0131 07:05:29.839826 4687 scope.go:117] "RemoveContainer" containerID="2870678d8ef3b4ce66abc3a889acd9cf6e04c0f95a1291bebaab2b0448491609" Jan 31 07:05:30 crc kubenswrapper[4687]: I0131 07:05:30.008559 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-ttw96"] Jan 31 07:05:30 crc kubenswrapper[4687]: I0131 07:05:30.035531 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-index-tnwzr"] Jan 31 07:05:30 crc kubenswrapper[4687]: I0131 07:05:30.036626 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-index-tnwzr" Jan 31 07:05:30 crc kubenswrapper[4687]: I0131 07:05:30.039864 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-index-dockercfg-kq5d7" Jan 31 07:05:30 crc kubenswrapper[4687]: I0131 07:05:30.050626 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-tnwzr"] Jan 31 07:05:30 crc kubenswrapper[4687]: I0131 07:05:30.204604 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxrmw\" (UniqueName: \"kubernetes.io/projected/eab13481-b0e4-40a4-8541-7738638251a9-kube-api-access-dxrmw\") pod \"swift-operator-index-tnwzr\" (UID: \"eab13481-b0e4-40a4-8541-7738638251a9\") " pod="openstack-operators/swift-operator-index-tnwzr" Jan 31 07:05:30 crc kubenswrapper[4687]: I0131 07:05:30.306379 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxrmw\" (UniqueName: \"kubernetes.io/projected/eab13481-b0e4-40a4-8541-7738638251a9-kube-api-access-dxrmw\") pod \"swift-operator-index-tnwzr\" (UID: \"eab13481-b0e4-40a4-8541-7738638251a9\") " pod="openstack-operators/swift-operator-index-tnwzr" Jan 31 07:05:30 crc kubenswrapper[4687]: I0131 07:05:30.329436 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxrmw\" (UniqueName: \"kubernetes.io/projected/eab13481-b0e4-40a4-8541-7738638251a9-kube-api-access-dxrmw\") pod \"swift-operator-index-tnwzr\" (UID: \"eab13481-b0e4-40a4-8541-7738638251a9\") " pod="openstack-operators/swift-operator-index-tnwzr" Jan 31 07:05:30 crc kubenswrapper[4687]: I0131 07:05:30.370064 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-index-tnwzr" Jan 31 07:05:30 crc kubenswrapper[4687]: I0131 07:05:30.850020 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-ttw96" event={"ID":"766b071f-fb29-43d1-be22-a261a8cb787c","Type":"ContainerStarted","Data":"9955655998eafef48390b8b6ba9363999963a5ff70a2e2b35759d6adfde82a38"} Jan 31 07:05:30 crc kubenswrapper[4687]: I0131 07:05:30.985009 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-tnwzr"] Jan 31 07:05:30 crc kubenswrapper[4687]: W0131 07:05:30.986690 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeab13481_b0e4_40a4_8541_7738638251a9.slice/crio-e6d5b8ddcd14f246d4d608a0dafc5908716f80a66b9ddf3784bea871e54f6b82 WatchSource:0}: Error finding container e6d5b8ddcd14f246d4d608a0dafc5908716f80a66b9ddf3784bea871e54f6b82: Status 404 returned error can't find the container with id e6d5b8ddcd14f246d4d608a0dafc5908716f80a66b9ddf3784bea871e54f6b82 Jan 31 07:05:31 crc kubenswrapper[4687]: I0131 07:05:31.865972 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-tnwzr" event={"ID":"eab13481-b0e4-40a4-8541-7738638251a9","Type":"ContainerStarted","Data":"e6d5b8ddcd14f246d4d608a0dafc5908716f80a66b9ddf3784bea871e54f6b82"} Jan 31 07:05:32 crc kubenswrapper[4687]: I0131 07:05:32.206332 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/horizon-operator-index-jbhw9"] Jan 31 07:05:32 crc kubenswrapper[4687]: I0131 07:05:32.206583 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/horizon-operator-index-jbhw9" podUID="5e489106-6a31-463b-98e6-62460f2ca169" containerName="registry-server" containerID="cri-o://0ef3df8fb45b01c3dd6748f91fe343eecf56690f4c5de67888e377400907f49e" gracePeriod=2 Jan 31 07:05:32 crc 
kubenswrapper[4687]: I0131 07:05:32.876225 4687 generic.go:334] "Generic (PLEG): container finished" podID="5e489106-6a31-463b-98e6-62460f2ca169" containerID="0ef3df8fb45b01c3dd6748f91fe343eecf56690f4c5de67888e377400907f49e" exitCode=0 Jan 31 07:05:32 crc kubenswrapper[4687]: I0131 07:05:32.876317 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-jbhw9" event={"ID":"5e489106-6a31-463b-98e6-62460f2ca169","Type":"ContainerDied","Data":"0ef3df8fb45b01c3dd6748f91fe343eecf56690f4c5de67888e377400907f49e"} Jan 31 07:05:32 crc kubenswrapper[4687]: I0131 07:05:32.879373 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-tnwzr" event={"ID":"eab13481-b0e4-40a4-8541-7738638251a9","Type":"ContainerStarted","Data":"5ddf63cdfc9b6199fb6f313fd8c6282c70d20f24b0029732cfc2a93c7f6a22f2"} Jan 31 07:05:32 crc kubenswrapper[4687]: I0131 07:05:32.900464 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-index-tnwzr" podStartSLOduration=2.707024035 podStartE2EDuration="3.90044259s" podCreationTimestamp="2026-01-31 07:05:29 +0000 UTC" firstStartedPulling="2026-01-31 07:05:30.989434158 +0000 UTC m=+1357.266693733" lastFinishedPulling="2026-01-31 07:05:32.182852713 +0000 UTC m=+1358.460112288" observedRunningTime="2026-01-31 07:05:32.894903999 +0000 UTC m=+1359.172163584" watchObservedRunningTime="2026-01-31 07:05:32.90044259 +0000 UTC m=+1359.177702175" Jan 31 07:05:33 crc kubenswrapper[4687]: I0131 07:05:33.013901 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-index-zb8pz"] Jan 31 07:05:33 crc kubenswrapper[4687]: I0131 07:05:33.014927 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-index-zb8pz" Jan 31 07:05:33 crc kubenswrapper[4687]: I0131 07:05:33.027871 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-zb8pz"] Jan 31 07:05:33 crc kubenswrapper[4687]: I0131 07:05:33.090756 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m549z\" (UniqueName: \"kubernetes.io/projected/f412fd69-af65-4534-97fc-1ddbd4ec579d-kube-api-access-m549z\") pod \"horizon-operator-index-zb8pz\" (UID: \"f412fd69-af65-4534-97fc-1ddbd4ec579d\") " pod="openstack-operators/horizon-operator-index-zb8pz" Jan 31 07:05:33 crc kubenswrapper[4687]: I0131 07:05:33.191848 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m549z\" (UniqueName: \"kubernetes.io/projected/f412fd69-af65-4534-97fc-1ddbd4ec579d-kube-api-access-m549z\") pod \"horizon-operator-index-zb8pz\" (UID: \"f412fd69-af65-4534-97fc-1ddbd4ec579d\") " pod="openstack-operators/horizon-operator-index-zb8pz" Jan 31 07:05:33 crc kubenswrapper[4687]: I0131 07:05:33.355366 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m549z\" (UniqueName: \"kubernetes.io/projected/f412fd69-af65-4534-97fc-1ddbd4ec579d-kube-api-access-m549z\") pod \"horizon-operator-index-zb8pz\" (UID: \"f412fd69-af65-4534-97fc-1ddbd4ec579d\") " pod="openstack-operators/horizon-operator-index-zb8pz" Jan 31 07:05:33 crc kubenswrapper[4687]: I0131 07:05:33.641742 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-zb8pz" Jan 31 07:05:36 crc kubenswrapper[4687]: I0131 07:05:36.676228 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-index-jbhw9" Jan 31 07:05:36 crc kubenswrapper[4687]: I0131 07:05:36.826766 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf5cd\" (UniqueName: \"kubernetes.io/projected/5e489106-6a31-463b-98e6-62460f2ca169-kube-api-access-gf5cd\") pod \"5e489106-6a31-463b-98e6-62460f2ca169\" (UID: \"5e489106-6a31-463b-98e6-62460f2ca169\") " Jan 31 07:05:36 crc kubenswrapper[4687]: I0131 07:05:36.835312 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e489106-6a31-463b-98e6-62460f2ca169-kube-api-access-gf5cd" (OuterVolumeSpecName: "kube-api-access-gf5cd") pod "5e489106-6a31-463b-98e6-62460f2ca169" (UID: "5e489106-6a31-463b-98e6-62460f2ca169"). InnerVolumeSpecName "kube-api-access-gf5cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:05:36 crc kubenswrapper[4687]: I0131 07:05:36.927999 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf5cd\" (UniqueName: \"kubernetes.io/projected/5e489106-6a31-463b-98e6-62460f2ca169-kube-api-access-gf5cd\") on node \"crc\" DevicePath \"\"" Jan 31 07:05:36 crc kubenswrapper[4687]: I0131 07:05:36.931939 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-jbhw9" event={"ID":"5e489106-6a31-463b-98e6-62460f2ca169","Type":"ContainerDied","Data":"410399e3cd72e1b875b5a4f6fad19d0146f6f9f4af30b4d857b1edd71b62e3b1"} Jan 31 07:05:36 crc kubenswrapper[4687]: I0131 07:05:36.931986 4687 scope.go:117] "RemoveContainer" containerID="0ef3df8fb45b01c3dd6748f91fe343eecf56690f4c5de67888e377400907f49e" Jan 31 07:05:36 crc kubenswrapper[4687]: I0131 07:05:36.931986 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-index-jbhw9" Jan 31 07:05:36 crc kubenswrapper[4687]: I0131 07:05:36.964286 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/horizon-operator-index-jbhw9"] Jan 31 07:05:36 crc kubenswrapper[4687]: I0131 07:05:36.970322 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/horizon-operator-index-jbhw9"] Jan 31 07:05:37 crc kubenswrapper[4687]: I0131 07:05:37.612531 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e489106-6a31-463b-98e6-62460f2ca169" path="/var/lib/kubelet/pods/5e489106-6a31-463b-98e6-62460f2ca169/volumes" Jan 31 07:05:40 crc kubenswrapper[4687]: I0131 07:05:40.370813 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/swift-operator-index-tnwzr" Jan 31 07:05:40 crc kubenswrapper[4687]: I0131 07:05:40.371078 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-index-tnwzr" Jan 31 07:05:40 crc kubenswrapper[4687]: I0131 07:05:40.425032 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/swift-operator-index-tnwzr" Jan 31 07:05:41 crc kubenswrapper[4687]: I0131 07:05:41.253089 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-index-tnwzr" Jan 31 07:05:47 crc kubenswrapper[4687]: E0131 07:05:47.186284 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Jan 31 07:05:47 crc kubenswrapper[4687]: E0131 07:05:47.186926 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7q58j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-ttw96_glance-kuttl-tests(766b071f-fb29-43d1-be22-a261a8cb787c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:05:47 crc kubenswrapper[4687]: E0131 07:05:47.188117 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="glance-kuttl-tests/keystone-db-sync-ttw96" podUID="766b071f-fb29-43d1-be22-a261a8cb787c" Jan 31 
07:05:47 crc kubenswrapper[4687]: I0131 07:05:47.581906 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-zb8pz"] Jan 31 07:05:47 crc kubenswrapper[4687]: W0131 07:05:47.592467 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf412fd69_af65_4534_97fc_1ddbd4ec579d.slice/crio-2ea8aa35a3c7d4590acd494d2f04d088719d3b21d6dad4e98b76b7d0cad88a2c WatchSource:0}: Error finding container 2ea8aa35a3c7d4590acd494d2f04d088719d3b21d6dad4e98b76b7d0cad88a2c: Status 404 returned error can't find the container with id 2ea8aa35a3c7d4590acd494d2f04d088719d3b21d6dad4e98b76b7d0cad88a2c Jan 31 07:05:48 crc kubenswrapper[4687]: I0131 07:05:48.036845 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-zb8pz" event={"ID":"f412fd69-af65-4534-97fc-1ddbd4ec579d","Type":"ContainerStarted","Data":"2ea8aa35a3c7d4590acd494d2f04d088719d3b21d6dad4e98b76b7d0cad88a2c"} Jan 31 07:05:48 crc kubenswrapper[4687]: E0131 07:05:48.039543 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="glance-kuttl-tests/keystone-db-sync-ttw96" podUID="766b071f-fb29-43d1-be22-a261a8cb787c" Jan 31 07:05:49 crc kubenswrapper[4687]: I0131 07:05:49.046247 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-zb8pz" event={"ID":"f412fd69-af65-4534-97fc-1ddbd4ec579d","Type":"ContainerStarted","Data":"99ac48c2772ae62e9dc32e5f9e7148cd0aa5b8a5774c92b42c6811a467862847"} Jan 31 07:05:49 crc kubenswrapper[4687]: I0131 07:05:49.072930 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-index-zb8pz" podStartSLOduration=16.395997599 
podStartE2EDuration="17.072906503s" podCreationTimestamp="2026-01-31 07:05:32 +0000 UTC" firstStartedPulling="2026-01-31 07:05:47.595156717 +0000 UTC m=+1373.872416292" lastFinishedPulling="2026-01-31 07:05:48.272065621 +0000 UTC m=+1374.549325196" observedRunningTime="2026-01-31 07:05:49.0665547 +0000 UTC m=+1375.343814285" watchObservedRunningTime="2026-01-31 07:05:49.072906503 +0000 UTC m=+1375.350166098" Jan 31 07:05:53 crc kubenswrapper[4687]: I0131 07:05:53.642257 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-index-zb8pz" Jan 31 07:05:53 crc kubenswrapper[4687]: I0131 07:05:53.642602 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/horizon-operator-index-zb8pz" Jan 31 07:05:53 crc kubenswrapper[4687]: I0131 07:05:53.672345 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/horizon-operator-index-zb8pz" Jan 31 07:05:54 crc kubenswrapper[4687]: I0131 07:05:54.128363 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-index-zb8pz" Jan 31 07:05:59 crc kubenswrapper[4687]: I0131 07:05:59.654047 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2"] Jan 31 07:05:59 crc kubenswrapper[4687]: E0131 07:05:59.654756 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e489106-6a31-463b-98e6-62460f2ca169" containerName="registry-server" Jan 31 07:05:59 crc kubenswrapper[4687]: I0131 07:05:59.654768 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e489106-6a31-463b-98e6-62460f2ca169" containerName="registry-server" Jan 31 07:05:59 crc kubenswrapper[4687]: I0131 07:05:59.654898 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e489106-6a31-463b-98e6-62460f2ca169" containerName="registry-server" Jan 31 07:05:59 
crc kubenswrapper[4687]: I0131 07:05:59.668326 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" Jan 31 07:05:59 crc kubenswrapper[4687]: I0131 07:05:59.671088 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9sffv" Jan 31 07:05:59 crc kubenswrapper[4687]: I0131 07:05:59.677060 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2"] Jan 31 07:05:59 crc kubenswrapper[4687]: I0131 07:05:59.774650 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6cf66be-126e-4ac2-ba8b-165628cd03e7-bundle\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2\" (UID: \"c6cf66be-126e-4ac2-ba8b-165628cd03e7\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" Jan 31 07:05:59 crc kubenswrapper[4687]: I0131 07:05:59.774719 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z25jq\" (UniqueName: \"kubernetes.io/projected/c6cf66be-126e-4ac2-ba8b-165628cd03e7-kube-api-access-z25jq\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2\" (UID: \"c6cf66be-126e-4ac2-ba8b-165628cd03e7\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" Jan 31 07:05:59 crc kubenswrapper[4687]: I0131 07:05:59.774753 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6cf66be-126e-4ac2-ba8b-165628cd03e7-util\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2\" (UID: \"c6cf66be-126e-4ac2-ba8b-165628cd03e7\") " 
pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" Jan 31 07:05:59 crc kubenswrapper[4687]: I0131 07:05:59.875920 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6cf66be-126e-4ac2-ba8b-165628cd03e7-bundle\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2\" (UID: \"c6cf66be-126e-4ac2-ba8b-165628cd03e7\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" Jan 31 07:05:59 crc kubenswrapper[4687]: I0131 07:05:59.875981 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z25jq\" (UniqueName: \"kubernetes.io/projected/c6cf66be-126e-4ac2-ba8b-165628cd03e7-kube-api-access-z25jq\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2\" (UID: \"c6cf66be-126e-4ac2-ba8b-165628cd03e7\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" Jan 31 07:05:59 crc kubenswrapper[4687]: I0131 07:05:59.876006 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6cf66be-126e-4ac2-ba8b-165628cd03e7-util\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2\" (UID: \"c6cf66be-126e-4ac2-ba8b-165628cd03e7\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" Jan 31 07:05:59 crc kubenswrapper[4687]: I0131 07:05:59.876646 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6cf66be-126e-4ac2-ba8b-165628cd03e7-bundle\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2\" (UID: \"c6cf66be-126e-4ac2-ba8b-165628cd03e7\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" Jan 31 07:05:59 crc kubenswrapper[4687]: I0131 07:05:59.876698 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6cf66be-126e-4ac2-ba8b-165628cd03e7-util\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2\" (UID: \"c6cf66be-126e-4ac2-ba8b-165628cd03e7\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" Jan 31 07:05:59 crc kubenswrapper[4687]: I0131 07:05:59.902177 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z25jq\" (UniqueName: \"kubernetes.io/projected/c6cf66be-126e-4ac2-ba8b-165628cd03e7-kube-api-access-z25jq\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2\" (UID: \"c6cf66be-126e-4ac2-ba8b-165628cd03e7\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" Jan 31 07:06:00 crc kubenswrapper[4687]: I0131 07:06:00.028666 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" Jan 31 07:06:00 crc kubenswrapper[4687]: I0131 07:06:00.449052 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2"] Jan 31 07:06:00 crc kubenswrapper[4687]: I0131 07:06:00.460569 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4"] Jan 31 07:06:00 crc kubenswrapper[4687]: W0131 07:06:00.452880 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6cf66be_126e_4ac2_ba8b_165628cd03e7.slice/crio-8c05ab9a3e031f1352b827b74d28be1f7552637304fb9ddddc21728e4c6157a8 WatchSource:0}: Error finding container 8c05ab9a3e031f1352b827b74d28be1f7552637304fb9ddddc21728e4c6157a8: Status 404 returned error can't find the container with id 
8c05ab9a3e031f1352b827b74d28be1f7552637304fb9ddddc21728e4c6157a8 Jan 31 07:06:00 crc kubenswrapper[4687]: I0131 07:06:00.462595 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" Jan 31 07:06:00 crc kubenswrapper[4687]: I0131 07:06:00.480696 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4"] Jan 31 07:06:00 crc kubenswrapper[4687]: I0131 07:06:00.584719 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff2f6\" (UniqueName: \"kubernetes.io/projected/8040a852-f1a4-420b-9897-a1c71c5b231c-kube-api-access-ff2f6\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4\" (UID: \"8040a852-f1a4-420b-9897-a1c71c5b231c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" Jan 31 07:06:00 crc kubenswrapper[4687]: I0131 07:06:00.584791 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8040a852-f1a4-420b-9897-a1c71c5b231c-bundle\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4\" (UID: \"8040a852-f1a4-420b-9897-a1c71c5b231c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" Jan 31 07:06:00 crc kubenswrapper[4687]: I0131 07:06:00.584910 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8040a852-f1a4-420b-9897-a1c71c5b231c-util\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4\" (UID: \"8040a852-f1a4-420b-9897-a1c71c5b231c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" Jan 31 07:06:00 crc kubenswrapper[4687]: I0131 
07:06:00.686491 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff2f6\" (UniqueName: \"kubernetes.io/projected/8040a852-f1a4-420b-9897-a1c71c5b231c-kube-api-access-ff2f6\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4\" (UID: \"8040a852-f1a4-420b-9897-a1c71c5b231c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" Jan 31 07:06:00 crc kubenswrapper[4687]: I0131 07:06:00.686554 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8040a852-f1a4-420b-9897-a1c71c5b231c-bundle\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4\" (UID: \"8040a852-f1a4-420b-9897-a1c71c5b231c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" Jan 31 07:06:00 crc kubenswrapper[4687]: I0131 07:06:00.686625 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8040a852-f1a4-420b-9897-a1c71c5b231c-util\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4\" (UID: \"8040a852-f1a4-420b-9897-a1c71c5b231c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" Jan 31 07:06:00 crc kubenswrapper[4687]: I0131 07:06:00.687066 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8040a852-f1a4-420b-9897-a1c71c5b231c-util\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4\" (UID: \"8040a852-f1a4-420b-9897-a1c71c5b231c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" Jan 31 07:06:00 crc kubenswrapper[4687]: I0131 07:06:00.687439 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8040a852-f1a4-420b-9897-a1c71c5b231c-bundle\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4\" (UID: \"8040a852-f1a4-420b-9897-a1c71c5b231c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" Jan 31 07:06:00 crc kubenswrapper[4687]: I0131 07:06:00.704791 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff2f6\" (UniqueName: \"kubernetes.io/projected/8040a852-f1a4-420b-9897-a1c71c5b231c-kube-api-access-ff2f6\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4\" (UID: \"8040a852-f1a4-420b-9897-a1c71c5b231c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" Jan 31 07:06:00 crc kubenswrapper[4687]: I0131 07:06:00.848012 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" Jan 31 07:06:01 crc kubenswrapper[4687]: I0131 07:06:01.130583 4687 generic.go:334] "Generic (PLEG): container finished" podID="c6cf66be-126e-4ac2-ba8b-165628cd03e7" containerID="b46ba4d2b537c3f02a9f1f4d6c3789e196ba9b704ef5f01c7e634c36356f5c3c" exitCode=0 Jan 31 07:06:01 crc kubenswrapper[4687]: I0131 07:06:01.130635 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" event={"ID":"c6cf66be-126e-4ac2-ba8b-165628cd03e7","Type":"ContainerDied","Data":"b46ba4d2b537c3f02a9f1f4d6c3789e196ba9b704ef5f01c7e634c36356f5c3c"} Jan 31 07:06:01 crc kubenswrapper[4687]: I0131 07:06:01.130662 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" event={"ID":"c6cf66be-126e-4ac2-ba8b-165628cd03e7","Type":"ContainerStarted","Data":"8c05ab9a3e031f1352b827b74d28be1f7552637304fb9ddddc21728e4c6157a8"} Jan 31 07:06:01 crc 
kubenswrapper[4687]: I0131 07:06:01.268016 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4"] Jan 31 07:06:02 crc kubenswrapper[4687]: I0131 07:06:02.137368 4687 generic.go:334] "Generic (PLEG): container finished" podID="8040a852-f1a4-420b-9897-a1c71c5b231c" containerID="c7bb073dc7f769dd63a4792ed024ae1b02144faef1b1ab6829129879b46af964" exitCode=0 Jan 31 07:06:02 crc kubenswrapper[4687]: I0131 07:06:02.137465 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" event={"ID":"8040a852-f1a4-420b-9897-a1c71c5b231c","Type":"ContainerDied","Data":"c7bb073dc7f769dd63a4792ed024ae1b02144faef1b1ab6829129879b46af964"} Jan 31 07:06:02 crc kubenswrapper[4687]: I0131 07:06:02.138488 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" event={"ID":"8040a852-f1a4-420b-9897-a1c71c5b231c","Type":"ContainerStarted","Data":"3f736c5c6acc35ac4fb51ace0e2094b00e44da7b4282a64d693fd60a64143d3c"} Jan 31 07:06:03 crc kubenswrapper[4687]: I0131 07:06:03.165232 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-ttw96" event={"ID":"766b071f-fb29-43d1-be22-a261a8cb787c","Type":"ContainerStarted","Data":"89329f80cc98acba809d1e66423207645be5bd9e81f2673e543f48e946e636d0"} Jan 31 07:06:03 crc kubenswrapper[4687]: I0131 07:06:03.167000 4687 generic.go:334] "Generic (PLEG): container finished" podID="c6cf66be-126e-4ac2-ba8b-165628cd03e7" containerID="a99ba4d066d011b3a277540ca7c805d48800c264d1c1f863bf2c2bcc8d7c2977" exitCode=0 Jan 31 07:06:03 crc kubenswrapper[4687]: I0131 07:06:03.167065 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" 
event={"ID":"c6cf66be-126e-4ac2-ba8b-165628cd03e7","Type":"ContainerDied","Data":"a99ba4d066d011b3a277540ca7c805d48800c264d1c1f863bf2c2bcc8d7c2977"} Jan 31 07:06:03 crc kubenswrapper[4687]: I0131 07:06:03.169739 4687 generic.go:334] "Generic (PLEG): container finished" podID="8040a852-f1a4-420b-9897-a1c71c5b231c" containerID="60d6618f95692b0b3804ab25bf0dfc4b23400823fad87c30a2ee78a94721869a" exitCode=0 Jan 31 07:06:03 crc kubenswrapper[4687]: I0131 07:06:03.169802 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" event={"ID":"8040a852-f1a4-420b-9897-a1c71c5b231c","Type":"ContainerDied","Data":"60d6618f95692b0b3804ab25bf0dfc4b23400823fad87c30a2ee78a94721869a"} Jan 31 07:06:03 crc kubenswrapper[4687]: I0131 07:06:03.184600 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-db-sync-ttw96" podStartSLOduration=2.154717327 podStartE2EDuration="34.184579419s" podCreationTimestamp="2026-01-31 07:05:29 +0000 UTC" firstStartedPulling="2026-01-31 07:05:30.037162336 +0000 UTC m=+1356.314421901" lastFinishedPulling="2026-01-31 07:06:02.067024418 +0000 UTC m=+1388.344283993" observedRunningTime="2026-01-31 07:06:03.180489637 +0000 UTC m=+1389.457749222" watchObservedRunningTime="2026-01-31 07:06:03.184579419 +0000 UTC m=+1389.461838994" Jan 31 07:06:04 crc kubenswrapper[4687]: I0131 07:06:04.186604 4687 generic.go:334] "Generic (PLEG): container finished" podID="c6cf66be-126e-4ac2-ba8b-165628cd03e7" containerID="373e70bcce4ea69a81848655d3ead83c2f1b16925d196b9b063b5d103381c87e" exitCode=0 Jan 31 07:06:04 crc kubenswrapper[4687]: I0131 07:06:04.186660 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" 
event={"ID":"c6cf66be-126e-4ac2-ba8b-165628cd03e7","Type":"ContainerDied","Data":"373e70bcce4ea69a81848655d3ead83c2f1b16925d196b9b063b5d103381c87e"} Jan 31 07:06:04 crc kubenswrapper[4687]: I0131 07:06:04.189540 4687 generic.go:334] "Generic (PLEG): container finished" podID="8040a852-f1a4-420b-9897-a1c71c5b231c" containerID="f3dac2f344ac8587ce79cc87e2f06f5b5cdba47a51ed5d45dee28cfa391fed31" exitCode=0 Jan 31 07:06:04 crc kubenswrapper[4687]: I0131 07:06:04.189577 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" event={"ID":"8040a852-f1a4-420b-9897-a1c71c5b231c","Type":"ContainerDied","Data":"f3dac2f344ac8587ce79cc87e2f06f5b5cdba47a51ed5d45dee28cfa391fed31"} Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.521024 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.534028 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.674761 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff2f6\" (UniqueName: \"kubernetes.io/projected/8040a852-f1a4-420b-9897-a1c71c5b231c-kube-api-access-ff2f6\") pod \"8040a852-f1a4-420b-9897-a1c71c5b231c\" (UID: \"8040a852-f1a4-420b-9897-a1c71c5b231c\") " Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.674859 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8040a852-f1a4-420b-9897-a1c71c5b231c-util\") pod \"8040a852-f1a4-420b-9897-a1c71c5b231c\" (UID: \"8040a852-f1a4-420b-9897-a1c71c5b231c\") " Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.674892 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6cf66be-126e-4ac2-ba8b-165628cd03e7-bundle\") pod \"c6cf66be-126e-4ac2-ba8b-165628cd03e7\" (UID: \"c6cf66be-126e-4ac2-ba8b-165628cd03e7\") " Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.674930 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6cf66be-126e-4ac2-ba8b-165628cd03e7-util\") pod \"c6cf66be-126e-4ac2-ba8b-165628cd03e7\" (UID: \"c6cf66be-126e-4ac2-ba8b-165628cd03e7\") " Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.674987 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z25jq\" (UniqueName: \"kubernetes.io/projected/c6cf66be-126e-4ac2-ba8b-165628cd03e7-kube-api-access-z25jq\") pod \"c6cf66be-126e-4ac2-ba8b-165628cd03e7\" (UID: \"c6cf66be-126e-4ac2-ba8b-165628cd03e7\") " Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.675111 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8040a852-f1a4-420b-9897-a1c71c5b231c-bundle\") pod \"8040a852-f1a4-420b-9897-a1c71c5b231c\" (UID: \"8040a852-f1a4-420b-9897-a1c71c5b231c\") " Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.676173 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8040a852-f1a4-420b-9897-a1c71c5b231c-bundle" (OuterVolumeSpecName: "bundle") pod "8040a852-f1a4-420b-9897-a1c71c5b231c" (UID: "8040a852-f1a4-420b-9897-a1c71c5b231c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.676745 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6cf66be-126e-4ac2-ba8b-165628cd03e7-bundle" (OuterVolumeSpecName: "bundle") pod "c6cf66be-126e-4ac2-ba8b-165628cd03e7" (UID: "c6cf66be-126e-4ac2-ba8b-165628cd03e7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.682071 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6cf66be-126e-4ac2-ba8b-165628cd03e7-kube-api-access-z25jq" (OuterVolumeSpecName: "kube-api-access-z25jq") pod "c6cf66be-126e-4ac2-ba8b-165628cd03e7" (UID: "c6cf66be-126e-4ac2-ba8b-165628cd03e7"). InnerVolumeSpecName "kube-api-access-z25jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.682442 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8040a852-f1a4-420b-9897-a1c71c5b231c-kube-api-access-ff2f6" (OuterVolumeSpecName: "kube-api-access-ff2f6") pod "8040a852-f1a4-420b-9897-a1c71c5b231c" (UID: "8040a852-f1a4-420b-9897-a1c71c5b231c"). InnerVolumeSpecName "kube-api-access-ff2f6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.690219 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8040a852-f1a4-420b-9897-a1c71c5b231c-util" (OuterVolumeSpecName: "util") pod "8040a852-f1a4-420b-9897-a1c71c5b231c" (UID: "8040a852-f1a4-420b-9897-a1c71c5b231c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.690308 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6cf66be-126e-4ac2-ba8b-165628cd03e7-util" (OuterVolumeSpecName: "util") pod "c6cf66be-126e-4ac2-ba8b-165628cd03e7" (UID: "c6cf66be-126e-4ac2-ba8b-165628cd03e7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.776889 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8040a852-f1a4-420b-9897-a1c71c5b231c-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.776940 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff2f6\" (UniqueName: \"kubernetes.io/projected/8040a852-f1a4-420b-9897-a1c71c5b231c-kube-api-access-ff2f6\") on node \"crc\" DevicePath \"\"" Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.776952 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8040a852-f1a4-420b-9897-a1c71c5b231c-util\") on node \"crc\" DevicePath \"\"" Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.776963 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6cf66be-126e-4ac2-ba8b-165628cd03e7-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.776975 4687 
reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6cf66be-126e-4ac2-ba8b-165628cd03e7-util\") on node \"crc\" DevicePath \"\"" Jan 31 07:06:05 crc kubenswrapper[4687]: I0131 07:06:05.776986 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z25jq\" (UniqueName: \"kubernetes.io/projected/c6cf66be-126e-4ac2-ba8b-165628cd03e7-kube-api-access-z25jq\") on node \"crc\" DevicePath \"\"" Jan 31 07:06:06 crc kubenswrapper[4687]: I0131 07:06:06.203021 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" event={"ID":"c6cf66be-126e-4ac2-ba8b-165628cd03e7","Type":"ContainerDied","Data":"8c05ab9a3e031f1352b827b74d28be1f7552637304fb9ddddc21728e4c6157a8"} Jan 31 07:06:06 crc kubenswrapper[4687]: I0131 07:06:06.203370 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c05ab9a3e031f1352b827b74d28be1f7552637304fb9ddddc21728e4c6157a8" Jan 31 07:06:06 crc kubenswrapper[4687]: I0131 07:06:06.203213 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2" Jan 31 07:06:06 crc kubenswrapper[4687]: I0131 07:06:06.204999 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" event={"ID":"8040a852-f1a4-420b-9897-a1c71c5b231c","Type":"ContainerDied","Data":"3f736c5c6acc35ac4fb51ace0e2094b00e44da7b4282a64d693fd60a64143d3c"} Jan 31 07:06:06 crc kubenswrapper[4687]: I0131 07:06:06.205038 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f736c5c6acc35ac4fb51ace0e2094b00e44da7b4282a64d693fd60a64143d3c" Jan 31 07:06:06 crc kubenswrapper[4687]: I0131 07:06:06.205051 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4" Jan 31 07:06:07 crc kubenswrapper[4687]: I0131 07:06:07.213380 4687 generic.go:334] "Generic (PLEG): container finished" podID="766b071f-fb29-43d1-be22-a261a8cb787c" containerID="89329f80cc98acba809d1e66423207645be5bd9e81f2673e543f48e946e636d0" exitCode=0 Jan 31 07:06:07 crc kubenswrapper[4687]: I0131 07:06:07.213471 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-ttw96" event={"ID":"766b071f-fb29-43d1-be22-a261a8cb787c","Type":"ContainerDied","Data":"89329f80cc98acba809d1e66423207645be5bd9e81f2673e543f48e946e636d0"} Jan 31 07:06:08 crc kubenswrapper[4687]: I0131 07:06:08.483108 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-ttw96" Jan 31 07:06:08 crc kubenswrapper[4687]: I0131 07:06:08.657490 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/766b071f-fb29-43d1-be22-a261a8cb787c-config-data\") pod \"766b071f-fb29-43d1-be22-a261a8cb787c\" (UID: \"766b071f-fb29-43d1-be22-a261a8cb787c\") " Jan 31 07:06:08 crc kubenswrapper[4687]: I0131 07:06:08.657572 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q58j\" (UniqueName: \"kubernetes.io/projected/766b071f-fb29-43d1-be22-a261a8cb787c-kube-api-access-7q58j\") pod \"766b071f-fb29-43d1-be22-a261a8cb787c\" (UID: \"766b071f-fb29-43d1-be22-a261a8cb787c\") " Jan 31 07:06:08 crc kubenswrapper[4687]: I0131 07:06:08.677616 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766b071f-fb29-43d1-be22-a261a8cb787c-kube-api-access-7q58j" (OuterVolumeSpecName: "kube-api-access-7q58j") pod "766b071f-fb29-43d1-be22-a261a8cb787c" (UID: "766b071f-fb29-43d1-be22-a261a8cb787c"). InnerVolumeSpecName "kube-api-access-7q58j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:06:08 crc kubenswrapper[4687]: I0131 07:06:08.695400 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/766b071f-fb29-43d1-be22-a261a8cb787c-config-data" (OuterVolumeSpecName: "config-data") pod "766b071f-fb29-43d1-be22-a261a8cb787c" (UID: "766b071f-fb29-43d1-be22-a261a8cb787c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:06:08 crc kubenswrapper[4687]: I0131 07:06:08.759018 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/766b071f-fb29-43d1-be22-a261a8cb787c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:06:08 crc kubenswrapper[4687]: I0131 07:06:08.759067 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q58j\" (UniqueName: \"kubernetes.io/projected/766b071f-fb29-43d1-be22-a261a8cb787c-kube-api-access-7q58j\") on node \"crc\" DevicePath \"\"" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.228191 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-ttw96" event={"ID":"766b071f-fb29-43d1-be22-a261a8cb787c","Type":"ContainerDied","Data":"9955655998eafef48390b8b6ba9363999963a5ff70a2e2b35759d6adfde82a38"} Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.228235 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9955655998eafef48390b8b6ba9363999963a5ff70a2e2b35759d6adfde82a38" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.228347 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-ttw96" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.439090 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-pg8vx"] Jan 31 07:06:09 crc kubenswrapper[4687]: E0131 07:06:09.439785 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8040a852-f1a4-420b-9897-a1c71c5b231c" containerName="util" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.439802 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8040a852-f1a4-420b-9897-a1c71c5b231c" containerName="util" Jan 31 07:06:09 crc kubenswrapper[4687]: E0131 07:06:09.439811 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6cf66be-126e-4ac2-ba8b-165628cd03e7" containerName="pull" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.439817 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6cf66be-126e-4ac2-ba8b-165628cd03e7" containerName="pull" Jan 31 07:06:09 crc kubenswrapper[4687]: E0131 07:06:09.439825 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766b071f-fb29-43d1-be22-a261a8cb787c" containerName="keystone-db-sync" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.439832 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="766b071f-fb29-43d1-be22-a261a8cb787c" containerName="keystone-db-sync" Jan 31 07:06:09 crc kubenswrapper[4687]: E0131 07:06:09.439840 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8040a852-f1a4-420b-9897-a1c71c5b231c" containerName="pull" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.439845 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8040a852-f1a4-420b-9897-a1c71c5b231c" containerName="pull" Jan 31 07:06:09 crc kubenswrapper[4687]: E0131 07:06:09.439857 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8040a852-f1a4-420b-9897-a1c71c5b231c" containerName="extract" Jan 31 07:06:09 crc 
kubenswrapper[4687]: I0131 07:06:09.439863 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8040a852-f1a4-420b-9897-a1c71c5b231c" containerName="extract" Jan 31 07:06:09 crc kubenswrapper[4687]: E0131 07:06:09.439873 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6cf66be-126e-4ac2-ba8b-165628cd03e7" containerName="util" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.439879 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6cf66be-126e-4ac2-ba8b-165628cd03e7" containerName="util" Jan 31 07:06:09 crc kubenswrapper[4687]: E0131 07:06:09.439888 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6cf66be-126e-4ac2-ba8b-165628cd03e7" containerName="extract" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.439894 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6cf66be-126e-4ac2-ba8b-165628cd03e7" containerName="extract" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.439996 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6cf66be-126e-4ac2-ba8b-165628cd03e7" containerName="extract" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.440015 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8040a852-f1a4-420b-9897-a1c71c5b231c" containerName="extract" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.440023 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="766b071f-fb29-43d1-be22-a261a8cb787c" containerName="keystone-db-sync" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.440455 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.443971 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.444083 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.444115 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"osp-secret" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.444909 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.446054 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-9spm5" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.447653 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-pg8vx"] Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.568635 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-config-data\") pod \"keystone-bootstrap-pg8vx\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.568702 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-scripts\") pod \"keystone-bootstrap-pg8vx\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.568784 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfxhp\" (UniqueName: \"kubernetes.io/projected/264870fa-efbf-41ea-9a90-6e154d696b02-kube-api-access-mfxhp\") pod \"keystone-bootstrap-pg8vx\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.568859 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-credential-keys\") pod \"keystone-bootstrap-pg8vx\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.569021 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-fernet-keys\") pod \"keystone-bootstrap-pg8vx\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.670051 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-scripts\") pod \"keystone-bootstrap-pg8vx\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.670185 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfxhp\" (UniqueName: \"kubernetes.io/projected/264870fa-efbf-41ea-9a90-6e154d696b02-kube-api-access-mfxhp\") pod \"keystone-bootstrap-pg8vx\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.670458 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-credential-keys\") pod \"keystone-bootstrap-pg8vx\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.670851 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-fernet-keys\") pod \"keystone-bootstrap-pg8vx\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.670992 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-config-data\") pod \"keystone-bootstrap-pg8vx\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.674562 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-credential-keys\") pod \"keystone-bootstrap-pg8vx\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.674740 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-config-data\") pod \"keystone-bootstrap-pg8vx\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.675136 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-fernet-keys\") pod \"keystone-bootstrap-pg8vx\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.685783 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-scripts\") pod \"keystone-bootstrap-pg8vx\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.689730 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfxhp\" (UniqueName: \"kubernetes.io/projected/264870fa-efbf-41ea-9a90-6e154d696b02-kube-api-access-mfxhp\") pod \"keystone-bootstrap-pg8vx\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:09 crc kubenswrapper[4687]: I0131 07:06:09.755752 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:10 crc kubenswrapper[4687]: I0131 07:06:10.149988 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-pg8vx"] Jan 31 07:06:10 crc kubenswrapper[4687]: I0131 07:06:10.240560 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" event={"ID":"264870fa-efbf-41ea-9a90-6e154d696b02","Type":"ContainerStarted","Data":"383c87c32c38a98b8ff6fdfcbcaffe50eb5d53034dd0c60eb8cda5add81c3261"} Jan 31 07:06:11 crc kubenswrapper[4687]: I0131 07:06:11.252041 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" event={"ID":"264870fa-efbf-41ea-9a90-6e154d696b02","Type":"ContainerStarted","Data":"3e581ba7c64f41139008f517bbfeea52a5527209dc56d4aeb78e6e3256a7e59f"} Jan 31 07:06:11 crc kubenswrapper[4687]: I0131 07:06:11.272312 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" podStartSLOduration=2.272293629 podStartE2EDuration="2.272293629s" podCreationTimestamp="2026-01-31 07:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:06:11.269961596 +0000 UTC m=+1397.547221181" watchObservedRunningTime="2026-01-31 07:06:11.272293629 +0000 UTC m=+1397.549553214" Jan 31 07:06:17 crc kubenswrapper[4687]: I0131 07:06:17.292927 4687 generic.go:334] "Generic (PLEG): container finished" podID="264870fa-efbf-41ea-9a90-6e154d696b02" containerID="3e581ba7c64f41139008f517bbfeea52a5527209dc56d4aeb78e6e3256a7e59f" exitCode=0 Jan 31 07:06:17 crc kubenswrapper[4687]: I0131 07:06:17.293012 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" 
event={"ID":"264870fa-efbf-41ea-9a90-6e154d696b02","Type":"ContainerDied","Data":"3e581ba7c64f41139008f517bbfeea52a5527209dc56d4aeb78e6e3256a7e59f"} Jan 31 07:06:18 crc kubenswrapper[4687]: I0131 07:06:18.557551 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:18 crc kubenswrapper[4687]: I0131 07:06:18.613510 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-config-data\") pod \"264870fa-efbf-41ea-9a90-6e154d696b02\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " Jan 31 07:06:18 crc kubenswrapper[4687]: I0131 07:06:18.613617 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-fernet-keys\") pod \"264870fa-efbf-41ea-9a90-6e154d696b02\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " Jan 31 07:06:18 crc kubenswrapper[4687]: I0131 07:06:18.613719 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-scripts\") pod \"264870fa-efbf-41ea-9a90-6e154d696b02\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " Jan 31 07:06:18 crc kubenswrapper[4687]: I0131 07:06:18.613761 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-credential-keys\") pod \"264870fa-efbf-41ea-9a90-6e154d696b02\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " Jan 31 07:06:18 crc kubenswrapper[4687]: I0131 07:06:18.613808 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfxhp\" (UniqueName: \"kubernetes.io/projected/264870fa-efbf-41ea-9a90-6e154d696b02-kube-api-access-mfxhp\") 
pod \"264870fa-efbf-41ea-9a90-6e154d696b02\" (UID: \"264870fa-efbf-41ea-9a90-6e154d696b02\") " Jan 31 07:06:18 crc kubenswrapper[4687]: I0131 07:06:18.618810 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "264870fa-efbf-41ea-9a90-6e154d696b02" (UID: "264870fa-efbf-41ea-9a90-6e154d696b02"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:06:18 crc kubenswrapper[4687]: I0131 07:06:18.618846 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "264870fa-efbf-41ea-9a90-6e154d696b02" (UID: "264870fa-efbf-41ea-9a90-6e154d696b02"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:06:18 crc kubenswrapper[4687]: I0131 07:06:18.619155 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-scripts" (OuterVolumeSpecName: "scripts") pod "264870fa-efbf-41ea-9a90-6e154d696b02" (UID: "264870fa-efbf-41ea-9a90-6e154d696b02"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:06:18 crc kubenswrapper[4687]: I0131 07:06:18.619263 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264870fa-efbf-41ea-9a90-6e154d696b02-kube-api-access-mfxhp" (OuterVolumeSpecName: "kube-api-access-mfxhp") pod "264870fa-efbf-41ea-9a90-6e154d696b02" (UID: "264870fa-efbf-41ea-9a90-6e154d696b02"). InnerVolumeSpecName "kube-api-access-mfxhp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:06:18 crc kubenswrapper[4687]: I0131 07:06:18.631841 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-config-data" (OuterVolumeSpecName: "config-data") pod "264870fa-efbf-41ea-9a90-6e154d696b02" (UID: "264870fa-efbf-41ea-9a90-6e154d696b02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:06:18 crc kubenswrapper[4687]: I0131 07:06:18.717168 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:06:18 crc kubenswrapper[4687]: I0131 07:06:18.717208 4687 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 07:06:18 crc kubenswrapper[4687]: I0131 07:06:18.717220 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfxhp\" (UniqueName: \"kubernetes.io/projected/264870fa-efbf-41ea-9a90-6e154d696b02-kube-api-access-mfxhp\") on node \"crc\" DevicePath \"\"" Jan 31 07:06:18 crc kubenswrapper[4687]: I0131 07:06:18.717229 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:06:18 crc kubenswrapper[4687]: I0131 07:06:18.717241 4687 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/264870fa-efbf-41ea-9a90-6e154d696b02-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.305398 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" 
event={"ID":"264870fa-efbf-41ea-9a90-6e154d696b02","Type":"ContainerDied","Data":"383c87c32c38a98b8ff6fdfcbcaffe50eb5d53034dd0c60eb8cda5add81c3261"} Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.305708 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="383c87c32c38a98b8ff6fdfcbcaffe50eb5d53034dd0c60eb8cda5add81c3261" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.305457 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-pg8vx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.397762 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-7f864d6549-bfflx"] Jan 31 07:06:19 crc kubenswrapper[4687]: E0131 07:06:19.398018 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264870fa-efbf-41ea-9a90-6e154d696b02" containerName="keystone-bootstrap" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.398029 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="264870fa-efbf-41ea-9a90-6e154d696b02" containerName="keystone-bootstrap" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.399775 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="264870fa-efbf-41ea-9a90-6e154d696b02" containerName="keystone-bootstrap" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.400582 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.407802 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.407935 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.408476 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.408603 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-9spm5" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.427320 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-fernet-keys\") pod \"keystone-7f864d6549-bfflx\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.429836 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-7f864d6549-bfflx"] Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.429995 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-config-data\") pod \"keystone-7f864d6549-bfflx\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.430054 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-scripts\") pod \"keystone-7f864d6549-bfflx\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.430112 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvwfm\" (UniqueName: \"kubernetes.io/projected/be44d699-42c9-4e7f-a533-8b39328ceedd-kube-api-access-hvwfm\") pod \"keystone-7f864d6549-bfflx\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.430215 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-credential-keys\") pod \"keystone-7f864d6549-bfflx\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.531942 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvwfm\" (UniqueName: \"kubernetes.io/projected/be44d699-42c9-4e7f-a533-8b39328ceedd-kube-api-access-hvwfm\") pod \"keystone-7f864d6549-bfflx\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.532014 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-credential-keys\") pod \"keystone-7f864d6549-bfflx\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.532045 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-fernet-keys\") pod \"keystone-7f864d6549-bfflx\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.532088 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-config-data\") pod \"keystone-7f864d6549-bfflx\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.532110 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-scripts\") pod \"keystone-7f864d6549-bfflx\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.535396 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-scripts\") pod \"keystone-7f864d6549-bfflx\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.535678 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-credential-keys\") pod \"keystone-7f864d6549-bfflx\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.535754 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-fernet-keys\") pod 
\"keystone-7f864d6549-bfflx\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.536731 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-config-data\") pod \"keystone-7f864d6549-bfflx\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.550436 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvwfm\" (UniqueName: \"kubernetes.io/projected/be44d699-42c9-4e7f-a533-8b39328ceedd-kube-api-access-hvwfm\") pod \"keystone-7f864d6549-bfflx\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:19 crc kubenswrapper[4687]: I0131 07:06:19.731116 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:20 crc kubenswrapper[4687]: I0131 07:06:20.062147 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-7f864d6549-bfflx"] Jan 31 07:06:20 crc kubenswrapper[4687]: I0131 07:06:20.312617 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" event={"ID":"be44d699-42c9-4e7f-a533-8b39328ceedd","Type":"ContainerStarted","Data":"29b31b5466c245cfbc7695198a62a9fbcab112ad827e3a840d0e4cc528776e6a"} Jan 31 07:06:20 crc kubenswrapper[4687]: I0131 07:06:20.312933 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" event={"ID":"be44d699-42c9-4e7f-a533-8b39328ceedd","Type":"ContainerStarted","Data":"838c6f70b7065b4ab23a891259428382176a70a51c4d7caa78e1e68b7500a812"} Jan 31 07:06:20 crc kubenswrapper[4687]: I0131 07:06:20.312956 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:20 crc kubenswrapper[4687]: I0131 07:06:20.326878 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" podStartSLOduration=1.326860682 podStartE2EDuration="1.326860682s" podCreationTimestamp="2026-01-31 07:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:06:20.325820374 +0000 UTC m=+1406.603079949" watchObservedRunningTime="2026-01-31 07:06:20.326860682 +0000 UTC m=+1406.604120277" Jan 31 07:06:22 crc kubenswrapper[4687]: I0131 07:06:22.553339 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54"] Jan 31 07:06:22 crc kubenswrapper[4687]: I0131 07:06:22.554612 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54" Jan 31 07:06:22 crc kubenswrapper[4687]: I0131 07:06:22.556627 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-rq6sd" Jan 31 07:06:22 crc kubenswrapper[4687]: I0131 07:06:22.556891 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-service-cert" Jan 31 07:06:22 crc kubenswrapper[4687]: I0131 07:06:22.564959 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54"] Jan 31 07:06:22 crc kubenswrapper[4687]: I0131 07:06:22.574662 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9baebd08-f9ca-4a8c-a12c-2609be678e5c-apiservice-cert\") pod \"horizon-operator-controller-manager-847c44d56-p7g54\" (UID: \"9baebd08-f9ca-4a8c-a12c-2609be678e5c\") " pod="openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54" Jan 31 07:06:22 crc kubenswrapper[4687]: I0131 07:06:22.574723 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9baebd08-f9ca-4a8c-a12c-2609be678e5c-webhook-cert\") pod \"horizon-operator-controller-manager-847c44d56-p7g54\" (UID: \"9baebd08-f9ca-4a8c-a12c-2609be678e5c\") " pod="openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54" Jan 31 07:06:22 crc kubenswrapper[4687]: I0131 07:06:22.574757 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbkmm\" (UniqueName: \"kubernetes.io/projected/9baebd08-f9ca-4a8c-a12c-2609be678e5c-kube-api-access-xbkmm\") pod \"horizon-operator-controller-manager-847c44d56-p7g54\" (UID: 
\"9baebd08-f9ca-4a8c-a12c-2609be678e5c\") " pod="openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54" Jan 31 07:06:22 crc kubenswrapper[4687]: I0131 07:06:22.676288 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9baebd08-f9ca-4a8c-a12c-2609be678e5c-apiservice-cert\") pod \"horizon-operator-controller-manager-847c44d56-p7g54\" (UID: \"9baebd08-f9ca-4a8c-a12c-2609be678e5c\") " pod="openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54" Jan 31 07:06:22 crc kubenswrapper[4687]: I0131 07:06:22.676355 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9baebd08-f9ca-4a8c-a12c-2609be678e5c-webhook-cert\") pod \"horizon-operator-controller-manager-847c44d56-p7g54\" (UID: \"9baebd08-f9ca-4a8c-a12c-2609be678e5c\") " pod="openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54" Jan 31 07:06:22 crc kubenswrapper[4687]: I0131 07:06:22.676384 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbkmm\" (UniqueName: \"kubernetes.io/projected/9baebd08-f9ca-4a8c-a12c-2609be678e5c-kube-api-access-xbkmm\") pod \"horizon-operator-controller-manager-847c44d56-p7g54\" (UID: \"9baebd08-f9ca-4a8c-a12c-2609be678e5c\") " pod="openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54" Jan 31 07:06:22 crc kubenswrapper[4687]: I0131 07:06:22.682636 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9baebd08-f9ca-4a8c-a12c-2609be678e5c-apiservice-cert\") pod \"horizon-operator-controller-manager-847c44d56-p7g54\" (UID: \"9baebd08-f9ca-4a8c-a12c-2609be678e5c\") " pod="openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54" Jan 31 07:06:22 crc kubenswrapper[4687]: I0131 07:06:22.683197 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9baebd08-f9ca-4a8c-a12c-2609be678e5c-webhook-cert\") pod \"horizon-operator-controller-manager-847c44d56-p7g54\" (UID: \"9baebd08-f9ca-4a8c-a12c-2609be678e5c\") " pod="openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54" Jan 31 07:06:22 crc kubenswrapper[4687]: I0131 07:06:22.692442 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbkmm\" (UniqueName: \"kubernetes.io/projected/9baebd08-f9ca-4a8c-a12c-2609be678e5c-kube-api-access-xbkmm\") pod \"horizon-operator-controller-manager-847c44d56-p7g54\" (UID: \"9baebd08-f9ca-4a8c-a12c-2609be678e5c\") " pod="openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54" Jan 31 07:06:22 crc kubenswrapper[4687]: I0131 07:06:22.874103 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54" Jan 31 07:06:23 crc kubenswrapper[4687]: I0131 07:06:23.278190 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54"] Jan 31 07:06:23 crc kubenswrapper[4687]: I0131 07:06:23.341357 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54" event={"ID":"9baebd08-f9ca-4a8c-a12c-2609be678e5c","Type":"ContainerStarted","Data":"1c6d1e149d6ce9f56e0f6a79e018d34453a0d30ffe5c93fe0438e7a4371d23a9"} Jan 31 07:06:27 crc kubenswrapper[4687]: I0131 07:06:27.372474 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54" event={"ID":"9baebd08-f9ca-4a8c-a12c-2609be678e5c","Type":"ContainerStarted","Data":"495aeb8e978b4cbf3d707fec451c9d2ad13fc2bdf8fe207476aedae27e3d2936"} Jan 31 07:06:27 crc kubenswrapper[4687]: I0131 07:06:27.373143 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54" Jan 31 07:06:27 crc kubenswrapper[4687]: I0131 07:06:27.399334 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54" podStartSLOduration=2.172678634 podStartE2EDuration="5.399314279s" podCreationTimestamp="2026-01-31 07:06:22 +0000 UTC" firstStartedPulling="2026-01-31 07:06:23.29784434 +0000 UTC m=+1409.575107105" lastFinishedPulling="2026-01-31 07:06:26.524483175 +0000 UTC m=+1412.801742750" observedRunningTime="2026-01-31 07:06:27.396900373 +0000 UTC m=+1413.674159948" watchObservedRunningTime="2026-01-31 07:06:27.399314279 +0000 UTC m=+1413.676573864" Jan 31 07:06:30 crc kubenswrapper[4687]: I0131 07:06:30.360501 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5"] Jan 31 07:06:30 crc kubenswrapper[4687]: I0131 07:06:30.361603 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" Jan 31 07:06:30 crc kubenswrapper[4687]: I0131 07:06:30.363875 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-service-cert" Jan 31 07:06:30 crc kubenswrapper[4687]: I0131 07:06:30.364031 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-vr2wl" Jan 31 07:06:30 crc kubenswrapper[4687]: I0131 07:06:30.382885 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5"] Jan 31 07:06:30 crc kubenswrapper[4687]: I0131 07:06:30.407519 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e229e979-1176-4e84-9dab-1027aee52b34-apiservice-cert\") pod \"swift-operator-controller-manager-648b98dfd7-f6vp5\" (UID: \"e229e979-1176-4e84-9dab-1027aee52b34\") " pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" Jan 31 07:06:30 crc kubenswrapper[4687]: I0131 07:06:30.407610 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e229e979-1176-4e84-9dab-1027aee52b34-webhook-cert\") pod \"swift-operator-controller-manager-648b98dfd7-f6vp5\" (UID: \"e229e979-1176-4e84-9dab-1027aee52b34\") " pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" Jan 31 07:06:30 crc kubenswrapper[4687]: I0131 07:06:30.407644 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bw8f\" (UniqueName: \"kubernetes.io/projected/e229e979-1176-4e84-9dab-1027aee52b34-kube-api-access-6bw8f\") pod \"swift-operator-controller-manager-648b98dfd7-f6vp5\" (UID: 
\"e229e979-1176-4e84-9dab-1027aee52b34\") " pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" Jan 31 07:06:30 crc kubenswrapper[4687]: I0131 07:06:30.509063 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e229e979-1176-4e84-9dab-1027aee52b34-webhook-cert\") pod \"swift-operator-controller-manager-648b98dfd7-f6vp5\" (UID: \"e229e979-1176-4e84-9dab-1027aee52b34\") " pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" Jan 31 07:06:30 crc kubenswrapper[4687]: I0131 07:06:30.509122 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bw8f\" (UniqueName: \"kubernetes.io/projected/e229e979-1176-4e84-9dab-1027aee52b34-kube-api-access-6bw8f\") pod \"swift-operator-controller-manager-648b98dfd7-f6vp5\" (UID: \"e229e979-1176-4e84-9dab-1027aee52b34\") " pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" Jan 31 07:06:30 crc kubenswrapper[4687]: I0131 07:06:30.509185 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e229e979-1176-4e84-9dab-1027aee52b34-apiservice-cert\") pod \"swift-operator-controller-manager-648b98dfd7-f6vp5\" (UID: \"e229e979-1176-4e84-9dab-1027aee52b34\") " pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" Jan 31 07:06:30 crc kubenswrapper[4687]: I0131 07:06:30.516322 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e229e979-1176-4e84-9dab-1027aee52b34-webhook-cert\") pod \"swift-operator-controller-manager-648b98dfd7-f6vp5\" (UID: \"e229e979-1176-4e84-9dab-1027aee52b34\") " pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" Jan 31 07:06:30 crc kubenswrapper[4687]: I0131 07:06:30.516458 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e229e979-1176-4e84-9dab-1027aee52b34-apiservice-cert\") pod \"swift-operator-controller-manager-648b98dfd7-f6vp5\" (UID: \"e229e979-1176-4e84-9dab-1027aee52b34\") " pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" Jan 31 07:06:30 crc kubenswrapper[4687]: I0131 07:06:30.529315 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bw8f\" (UniqueName: \"kubernetes.io/projected/e229e979-1176-4e84-9dab-1027aee52b34-kube-api-access-6bw8f\") pod \"swift-operator-controller-manager-648b98dfd7-f6vp5\" (UID: \"e229e979-1176-4e84-9dab-1027aee52b34\") " pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" Jan 31 07:06:30 crc kubenswrapper[4687]: I0131 07:06:30.693044 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" Jan 31 07:06:31 crc kubenswrapper[4687]: I0131 07:06:31.348195 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5"] Jan 31 07:06:31 crc kubenswrapper[4687]: W0131 07:06:31.351734 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode229e979_1176_4e84_9dab_1027aee52b34.slice/crio-f1dfbb04b137d953d7dcb87137b305f1219fdaa6a1a779063d8de1984b77da47 WatchSource:0}: Error finding container f1dfbb04b137d953d7dcb87137b305f1219fdaa6a1a779063d8de1984b77da47: Status 404 returned error can't find the container with id f1dfbb04b137d953d7dcb87137b305f1219fdaa6a1a779063d8de1984b77da47 Jan 31 07:06:31 crc kubenswrapper[4687]: I0131 07:06:31.400209 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" 
event={"ID":"e229e979-1176-4e84-9dab-1027aee52b34","Type":"ContainerStarted","Data":"f1dfbb04b137d953d7dcb87137b305f1219fdaa6a1a779063d8de1984b77da47"} Jan 31 07:06:32 crc kubenswrapper[4687]: I0131 07:06:32.881249 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-847c44d56-p7g54" Jan 31 07:06:34 crc kubenswrapper[4687]: I0131 07:06:34.428463 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" event={"ID":"e229e979-1176-4e84-9dab-1027aee52b34","Type":"ContainerStarted","Data":"042f494a78d21700df8fb39607568af9066a7e2d66ad07dff7bfc862061b9adf"} Jan 31 07:06:34 crc kubenswrapper[4687]: I0131 07:06:34.428782 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" Jan 31 07:06:34 crc kubenswrapper[4687]: I0131 07:06:34.443469 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" podStartSLOduration=1.687362889 podStartE2EDuration="4.443451942s" podCreationTimestamp="2026-01-31 07:06:30 +0000 UTC" firstStartedPulling="2026-01-31 07:06:31.354439369 +0000 UTC m=+1417.631698944" lastFinishedPulling="2026-01-31 07:06:34.110528422 +0000 UTC m=+1420.387787997" observedRunningTime="2026-01-31 07:06:34.443334789 +0000 UTC m=+1420.720594394" watchObservedRunningTime="2026-01-31 07:06:34.443451942 +0000 UTC m=+1420.720711507" Jan 31 07:06:40 crc kubenswrapper[4687]: I0131 07:06:40.701045 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.416736 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 
07:06:46.426538 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.433309 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-files" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.433635 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-storage-config-data" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.433796 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-swift-dockercfg-2bc42" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.440084 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-conf" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.452264 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.472491 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-cdxqh"] Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.474018 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.486308 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-config-data" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.488986 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-scripts" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.490049 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-proxy-config-data" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.490684 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-cdxqh"] Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.529540 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-lock\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.529596 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5ckk\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-kube-api-access-s5ckk\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.529648 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.529795 
4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-cache\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.529873 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.631895 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgrqn\" (UniqueName: \"kubernetes.io/projected/68acc278-6e5f-44d7-88ec-25ed80bda714-kube-api-access-hgrqn\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.631997 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-lock\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.632049 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5ckk\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-kube-api-access-s5ckk\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.632516 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/68acc278-6e5f-44d7-88ec-25ed80bda714-dispersionconf\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.632576 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:46 crc kubenswrapper[4687]: E0131 07:06:46.632711 4687 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 07:06:46 crc kubenswrapper[4687]: E0131 07:06:46.632742 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 07:06:46 crc kubenswrapper[4687]: E0131 07:06:46.632797 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift podName:4f3169d5-4ca5-47e8-a6a4-b34705f30dd0 nodeName:}" failed. No retries permitted until 2026-01-31 07:06:47.132775523 +0000 UTC m=+1433.410035098 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift") pod "swift-storage-0" (UID: "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0") : configmap "swift-ring-files" not found Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.632928 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-lock\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.633007 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-cache\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.633038 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/68acc278-6e5f-44d7-88ec-25ed80bda714-swiftconf\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.633097 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.633170 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/68acc278-6e5f-44d7-88ec-25ed80bda714-ring-data-devices\") pod 
\"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.633238 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68acc278-6e5f-44d7-88ec-25ed80bda714-scripts\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.633311 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/68acc278-6e5f-44d7-88ec-25ed80bda714-etc-swift\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.633545 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") device mount path \"/mnt/openstack/pv07\"" pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.646015 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-cache\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.658938 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " 
pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.659283 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5ckk\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-kube-api-access-s5ckk\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.734433 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/68acc278-6e5f-44d7-88ec-25ed80bda714-dispersionconf\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.734541 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/68acc278-6e5f-44d7-88ec-25ed80bda714-swiftconf\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.734572 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/68acc278-6e5f-44d7-88ec-25ed80bda714-ring-data-devices\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.734595 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68acc278-6e5f-44d7-88ec-25ed80bda714-scripts\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc 
kubenswrapper[4687]: I0131 07:06:46.734626 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/68acc278-6e5f-44d7-88ec-25ed80bda714-etc-swift\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.734671 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgrqn\" (UniqueName: \"kubernetes.io/projected/68acc278-6e5f-44d7-88ec-25ed80bda714-kube-api-access-hgrqn\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.735483 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/68acc278-6e5f-44d7-88ec-25ed80bda714-etc-swift\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.735799 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/68acc278-6e5f-44d7-88ec-25ed80bda714-ring-data-devices\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.736052 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68acc278-6e5f-44d7-88ec-25ed80bda714-scripts\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.738296 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/68acc278-6e5f-44d7-88ec-25ed80bda714-dispersionconf\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.738918 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/68acc278-6e5f-44d7-88ec-25ed80bda714-swiftconf\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.753290 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgrqn\" (UniqueName: \"kubernetes.io/projected/68acc278-6e5f-44d7-88ec-25ed80bda714-kube-api-access-hgrqn\") pod \"swift-ring-rebalance-cdxqh\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:46 crc kubenswrapper[4687]: I0131 07:06:46.794825 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:06:47 crc kubenswrapper[4687]: I0131 07:06:47.149875 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:47 crc kubenswrapper[4687]: E0131 07:06:47.150930 4687 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 07:06:47 crc kubenswrapper[4687]: E0131 07:06:47.150977 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 07:06:47 crc kubenswrapper[4687]: E0131 07:06:47.151070 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift podName:4f3169d5-4ca5-47e8-a6a4-b34705f30dd0 nodeName:}" failed. No retries permitted until 2026-01-31 07:06:48.151039063 +0000 UTC m=+1434.428298638 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift") pod "swift-storage-0" (UID: "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0") : configmap "swift-ring-files" not found Jan 31 07:06:47 crc kubenswrapper[4687]: I0131 07:06:47.496756 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-cdxqh"] Jan 31 07:06:47 crc kubenswrapper[4687]: I0131 07:06:47.548005 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" event={"ID":"68acc278-6e5f-44d7-88ec-25ed80bda714","Type":"ContainerStarted","Data":"5f40a43e1b60331d122e35e00c092506f914058877311a0358ea945116a95524"} Jan 31 07:06:47 crc kubenswrapper[4687]: I0131 07:06:47.822936 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-index-xzj7m"] Jan 31 07:06:47 crc kubenswrapper[4687]: I0131 07:06:47.823759 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-index-xzj7m" Jan 31 07:06:47 crc kubenswrapper[4687]: I0131 07:06:47.826476 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-index-dockercfg-tp42w" Jan 31 07:06:47 crc kubenswrapper[4687]: I0131 07:06:47.832376 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-xzj7m"] Jan 31 07:06:47 crc kubenswrapper[4687]: I0131 07:06:47.872113 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc7hh\" (UniqueName: \"kubernetes.io/projected/5c16d3a2-fd52-44b4-80ce-6bf3479ee45c-kube-api-access-rc7hh\") pod \"glance-operator-index-xzj7m\" (UID: \"5c16d3a2-fd52-44b4-80ce-6bf3479ee45c\") " pod="openstack-operators/glance-operator-index-xzj7m" Jan 31 07:06:47 crc kubenswrapper[4687]: I0131 07:06:47.974259 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc7hh\" (UniqueName: \"kubernetes.io/projected/5c16d3a2-fd52-44b4-80ce-6bf3479ee45c-kube-api-access-rc7hh\") pod \"glance-operator-index-xzj7m\" (UID: \"5c16d3a2-fd52-44b4-80ce-6bf3479ee45c\") " pod="openstack-operators/glance-operator-index-xzj7m" Jan 31 07:06:48 crc kubenswrapper[4687]: I0131 07:06:48.003930 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc7hh\" (UniqueName: \"kubernetes.io/projected/5c16d3a2-fd52-44b4-80ce-6bf3479ee45c-kube-api-access-rc7hh\") pod \"glance-operator-index-xzj7m\" (UID: \"5c16d3a2-fd52-44b4-80ce-6bf3479ee45c\") " pod="openstack-operators/glance-operator-index-xzj7m" Jan 31 07:06:48 crc kubenswrapper[4687]: I0131 07:06:48.142534 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-index-xzj7m" Jan 31 07:06:48 crc kubenswrapper[4687]: I0131 07:06:48.176889 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:48 crc kubenswrapper[4687]: E0131 07:06:48.177073 4687 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 07:06:48 crc kubenswrapper[4687]: E0131 07:06:48.177094 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 07:06:48 crc kubenswrapper[4687]: E0131 07:06:48.177144 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift podName:4f3169d5-4ca5-47e8-a6a4-b34705f30dd0 nodeName:}" failed. No retries permitted until 2026-01-31 07:06:50.177128199 +0000 UTC m=+1436.454387774 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift") pod "swift-storage-0" (UID: "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0") : configmap "swift-ring-files" not found Jan 31 07:06:48 crc kubenswrapper[4687]: I0131 07:06:48.663370 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-xzj7m"] Jan 31 07:06:49 crc kubenswrapper[4687]: I0131 07:06:49.687637 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-xzj7m" event={"ID":"5c16d3a2-fd52-44b4-80ce-6bf3479ee45c","Type":"ContainerStarted","Data":"76cba60103615a1df5061405c947a6b2e1fd017d21411f02f3bd3b092c8433d6"} Jan 31 07:06:50 crc kubenswrapper[4687]: I0131 07:06:50.410725 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:50 crc kubenswrapper[4687]: E0131 07:06:50.411005 4687 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 07:06:50 crc kubenswrapper[4687]: E0131 07:06:50.411033 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 07:06:50 crc kubenswrapper[4687]: E0131 07:06:50.411105 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift podName:4f3169d5-4ca5-47e8-a6a4-b34705f30dd0 nodeName:}" failed. No retries permitted until 2026-01-31 07:06:54.411079663 +0000 UTC m=+1440.688339238 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift") pod "swift-storage-0" (UID: "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0") : configmap "swift-ring-files" not found Jan 31 07:06:52 crc kubenswrapper[4687]: I0131 07:06:52.014804 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/glance-operator-index-xzj7m"] Jan 31 07:06:52 crc kubenswrapper[4687]: I0131 07:06:52.631594 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-index-h6w75"] Jan 31 07:06:52 crc kubenswrapper[4687]: I0131 07:06:52.633222 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-h6w75" Jan 31 07:06:52 crc kubenswrapper[4687]: I0131 07:06:52.678089 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bpb7\" (UniqueName: \"kubernetes.io/projected/47d8e3aa-adce-49bd-8e29-a0adeea6009e-kube-api-access-4bpb7\") pod \"glance-operator-index-h6w75\" (UID: \"47d8e3aa-adce-49bd-8e29-a0adeea6009e\") " pod="openstack-operators/glance-operator-index-h6w75" Jan 31 07:06:52 crc kubenswrapper[4687]: I0131 07:06:52.779708 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bpb7\" (UniqueName: \"kubernetes.io/projected/47d8e3aa-adce-49bd-8e29-a0adeea6009e-kube-api-access-4bpb7\") pod \"glance-operator-index-h6w75\" (UID: \"47d8e3aa-adce-49bd-8e29-a0adeea6009e\") " pod="openstack-operators/glance-operator-index-h6w75" Jan 31 07:06:52 crc kubenswrapper[4687]: I0131 07:06:52.884384 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bpb7\" (UniqueName: \"kubernetes.io/projected/47d8e3aa-adce-49bd-8e29-a0adeea6009e-kube-api-access-4bpb7\") pod \"glance-operator-index-h6w75\" (UID: \"47d8e3aa-adce-49bd-8e29-a0adeea6009e\") " 
pod="openstack-operators/glance-operator-index-h6w75" Jan 31 07:06:52 crc kubenswrapper[4687]: I0131 07:06:52.963594 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-h6w75" Jan 31 07:06:53 crc kubenswrapper[4687]: I0131 07:06:53.002628 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-h6w75"] Jan 31 07:06:53 crc kubenswrapper[4687]: I0131 07:06:53.669721 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:06:54 crc kubenswrapper[4687]: I0131 07:06:54.416275 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:06:54 crc kubenswrapper[4687]: E0131 07:06:54.416399 4687 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 07:06:54 crc kubenswrapper[4687]: E0131 07:06:54.416453 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 07:06:54 crc kubenswrapper[4687]: E0131 07:06:54.416515 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift podName:4f3169d5-4ca5-47e8-a6a4-b34705f30dd0 nodeName:}" failed. No retries permitted until 2026-01-31 07:07:02.416492518 +0000 UTC m=+1448.693752103 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift") pod "swift-storage-0" (UID: "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0") : configmap "swift-ring-files" not found Jan 31 07:06:56 crc kubenswrapper[4687]: I0131 07:06:56.154762 4687 scope.go:117] "RemoveContainer" containerID="b407e989eb276fbf8fae861bc16d4e38db39a6e3a410ea78c829aa7f16c2245d" Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.418800 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-proxy-6d699db77c-f72hz"] Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.420500 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.443553 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-proxy-6d699db77c-f72hz"] Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.556195 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b574508-eb1c-4b61-bc77-3878a38f36f3-config-data\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.556263 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.556298 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ltmj\" (UniqueName: 
\"kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-kube-api-access-7ltmj\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.556333 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b574508-eb1c-4b61-bc77-3878a38f36f3-run-httpd\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.556357 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b574508-eb1c-4b61-bc77-3878a38f36f3-log-httpd\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.660169 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b574508-eb1c-4b61-bc77-3878a38f36f3-run-httpd\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.660205 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b574508-eb1c-4b61-bc77-3878a38f36f3-log-httpd\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.660275 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3b574508-eb1c-4b61-bc77-3878a38f36f3-config-data\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.660375 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.660425 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ltmj\" (UniqueName: \"kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-kube-api-access-7ltmj\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:06:59 crc kubenswrapper[4687]: E0131 07:06:59.660543 4687 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 07:06:59 crc kubenswrapper[4687]: E0131 07:06:59.660572 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-6d699db77c-f72hz: configmap "swift-ring-files" not found Jan 31 07:06:59 crc kubenswrapper[4687]: E0131 07:06:59.660641 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift podName:3b574508-eb1c-4b61-bc77-3878a38f36f3 nodeName:}" failed. No retries permitted until 2026-01-31 07:07:00.160614112 +0000 UTC m=+1446.437873737 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift") pod "swift-proxy-6d699db77c-f72hz" (UID: "3b574508-eb1c-4b61-bc77-3878a38f36f3") : configmap "swift-ring-files" not found Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.661451 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b574508-eb1c-4b61-bc77-3878a38f36f3-run-httpd\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.661656 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b574508-eb1c-4b61-bc77-3878a38f36f3-log-httpd\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.670376 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b574508-eb1c-4b61-bc77-3878a38f36f3-config-data\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:06:59 crc kubenswrapper[4687]: I0131 07:06:59.678943 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ltmj\" (UniqueName: \"kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-kube-api-access-7ltmj\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:07:00 crc kubenswrapper[4687]: I0131 07:07:00.166667 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:07:00 crc kubenswrapper[4687]: E0131 07:07:00.167050 4687 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 07:07:00 crc kubenswrapper[4687]: E0131 07:07:00.167077 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-6d699db77c-f72hz: configmap "swift-ring-files" not found Jan 31 07:07:00 crc kubenswrapper[4687]: E0131 07:07:00.167128 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift podName:3b574508-eb1c-4b61-bc77-3878a38f36f3 nodeName:}" failed. No retries permitted until 2026-01-31 07:07:01.167109911 +0000 UTC m=+1447.444369486 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift") pod "swift-proxy-6d699db77c-f72hz" (UID: "3b574508-eb1c-4b61-bc77-3878a38f36f3") : configmap "swift-ring-files" not found Jan 31 07:07:01 crc kubenswrapper[4687]: I0131 07:07:01.181322 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:07:01 crc kubenswrapper[4687]: E0131 07:07:01.181567 4687 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 07:07:01 crc kubenswrapper[4687]: E0131 07:07:01.181832 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-6d699db77c-f72hz: configmap "swift-ring-files" not found Jan 31 07:07:01 crc kubenswrapper[4687]: E0131 07:07:01.181898 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift podName:3b574508-eb1c-4b61-bc77-3878a38f36f3 nodeName:}" failed. No retries permitted until 2026-01-31 07:07:03.181879617 +0000 UTC m=+1449.459139192 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift") pod "swift-proxy-6d699db77c-f72hz" (UID: "3b574508-eb1c-4b61-bc77-3878a38f36f3") : configmap "swift-ring-files" not found Jan 31 07:07:02 crc kubenswrapper[4687]: I0131 07:07:02.502787 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:07:02 crc kubenswrapper[4687]: E0131 07:07:02.502955 4687 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 07:07:02 crc kubenswrapper[4687]: E0131 07:07:02.502974 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 07:07:02 crc kubenswrapper[4687]: E0131 07:07:02.503039 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift podName:4f3169d5-4ca5-47e8-a6a4-b34705f30dd0 nodeName:}" failed. No retries permitted until 2026-01-31 07:07:18.503019806 +0000 UTC m=+1464.780279381 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift") pod "swift-storage-0" (UID: "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0") : configmap "swift-ring-files" not found Jan 31 07:07:03 crc kubenswrapper[4687]: I0131 07:07:03.016441 4687 scope.go:117] "RemoveContainer" containerID="ed7197029267da74212df66269585e3e693cff3034185bc992f02b144c82c8a8" Jan 31 07:07:03 crc kubenswrapper[4687]: E0131 07:07:03.154098 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:ac7fefe1c93839c7ccb2aaa0a18751df0e9f64a36a3b4cc1b81d82d7774b8b45" Jan 31 07:07:03 crc kubenswrapper[4687]: E0131 07:07:03.154459 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:swift-ring-rebalance,Image:quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:ac7fefe1c93839c7ccb2aaa0a18751df0e9f64a36a3b4cc1b81d82d7774b8b45,Command:[/usr/local/bin/swift-ring-tool 
all],Args:[],WorkingDir:/etc/swift,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CM_NAME,Value:swift-ring-files,ValueFrom:nil,},EnvVar{Name:NAMESPACE,Value:glance-kuttl-tests,ValueFrom:nil,},EnvVar{Name:OWNER_APIVERSION,Value:swift.openstack.org/v1beta1,ValueFrom:nil,},EnvVar{Name:OWNER_KIND,Value:SwiftRing,ValueFrom:nil,},EnvVar{Name:OWNER_NAME,Value:swift-ring,ValueFrom:nil,},EnvVar{Name:OWNER_UID,Value:c2a03fce-ea4c-4a0e-86e1-d3371d1bbef1,ValueFrom:nil,},EnvVar{Name:SWIFT_MIN_PART_HOURS,Value:1,ValueFrom:nil,},EnvVar{Name:SWIFT_PART_POWER,Value:10,ValueFrom:nil,},EnvVar{Name:SWIFT_REPLICAS,Value:1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/swift-ring-tool,SubPath:swift-ring-tool,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:swiftconf,ReadOnly:true,MountPath:/etc/swift/swift.conf,SubPath:swift.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ring-data-devices,ReadOnly:true,MountPath:/var/lib/config-data/ring-devices,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dispersionconf,ReadOnly:true,MountPath:/etc/swift/dispersion.conf,SubPath:dispersion.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hgrqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesyste
m:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-ring-rebalance-cdxqh_glance-kuttl-tests(68acc278-6e5f-44d7-88ec-25ed80bda714): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:07:03 crc kubenswrapper[4687]: E0131 07:07:03.155660 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" podUID="68acc278-6e5f-44d7-88ec-25ed80bda714" Jan 31 07:07:03 crc kubenswrapper[4687]: I0131 07:07:03.213196 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:07:03 crc kubenswrapper[4687]: E0131 07:07:03.213493 4687 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 07:07:03 crc kubenswrapper[4687]: E0131 07:07:03.213528 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-6d699db77c-f72hz: configmap "swift-ring-files" not found Jan 31 07:07:03 crc kubenswrapper[4687]: E0131 07:07:03.213596 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift podName:3b574508-eb1c-4b61-bc77-3878a38f36f3 nodeName:}" failed. 
No retries permitted until 2026-01-31 07:07:07.213573687 +0000 UTC m=+1453.490833262 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift") pod "swift-proxy-6d699db77c-f72hz" (UID: "3b574508-eb1c-4b61-bc77-3878a38f36f3") : configmap "swift-ring-files" not found Jan 31 07:07:03 crc kubenswrapper[4687]: I0131 07:07:03.712933 4687 scope.go:117] "RemoveContainer" containerID="831449559cc17040d18bc47380a8cb26c7ef97a75eb1773fc2a63a66125acaf7" Jan 31 07:07:03 crc kubenswrapper[4687]: E0131 07:07:03.895977 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:ac7fefe1c93839c7ccb2aaa0a18751df0e9f64a36a3b4cc1b81d82d7774b8b45\\\"\"" pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" podUID="68acc278-6e5f-44d7-88ec-25ed80bda714" Jan 31 07:07:05 crc kubenswrapper[4687]: I0131 07:07:05.565049 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-h6w75"] Jan 31 07:07:05 crc kubenswrapper[4687]: W0131 07:07:05.613913 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47d8e3aa_adce_49bd_8e29_a0adeea6009e.slice/crio-dc04a9182c3fe270c2575db4f62b09ff1a5e0edd26f41c409efffb29bd4f204f WatchSource:0}: Error finding container dc04a9182c3fe270c2575db4f62b09ff1a5e0edd26f41c409efffb29bd4f204f: Status 404 returned error can't find the container with id dc04a9182c3fe270c2575db4f62b09ff1a5e0edd26f41c409efffb29bd4f204f Jan 31 07:07:05 crc kubenswrapper[4687]: I0131 07:07:05.912341 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-h6w75" 
event={"ID":"47d8e3aa-adce-49bd-8e29-a0adeea6009e","Type":"ContainerStarted","Data":"dc04a9182c3fe270c2575db4f62b09ff1a5e0edd26f41c409efffb29bd4f204f"} Jan 31 07:07:06 crc kubenswrapper[4687]: E0131 07:07:06.646928 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.156:5001/openstack-k8s-operators/glance-operator-index:91cca16f8744145aab97d8a109a611ffceba50d5" Jan 31 07:07:06 crc kubenswrapper[4687]: E0131 07:07:06.647316 4687 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.156:5001/openstack-k8s-operators/glance-operator-index:91cca16f8744145aab97d8a109a611ffceba50d5" Jan 31 07:07:06 crc kubenswrapper[4687]: E0131 07:07:06.647518 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:38.102.83.156:5001/openstack-k8s-operators/glance-operator-index:91cca16f8744145aab97d8a109a611ffceba50d5,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rc7hh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-index-xzj7m_openstack-operators(5c16d3a2-fd52-44b4-80ce-6bf3479ee45c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:07:06 crc kubenswrapper[4687]: E0131 07:07:06.648914 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-index-xzj7m" podUID="5c16d3a2-fd52-44b4-80ce-6bf3479ee45c" Jan 31 07:07:06 crc kubenswrapper[4687]: I0131 07:07:06.920468 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-h6w75" event={"ID":"47d8e3aa-adce-49bd-8e29-a0adeea6009e","Type":"ContainerStarted","Data":"8d031d0a222d46ec2116b63d32a7056ffd2315cc8cb1ed1a26c67f9f74410faf"} Jan 31 07:07:06 crc kubenswrapper[4687]: I0131 
07:07:06.940116 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-index-h6w75" podStartSLOduration=13.865933631 podStartE2EDuration="14.940101871s" podCreationTimestamp="2026-01-31 07:06:52 +0000 UTC" firstStartedPulling="2026-01-31 07:07:05.622616213 +0000 UTC m=+1451.899875828" lastFinishedPulling="2026-01-31 07:07:06.696784453 +0000 UTC m=+1452.974044068" observedRunningTime="2026-01-31 07:07:06.937685315 +0000 UTC m=+1453.214944890" watchObservedRunningTime="2026-01-31 07:07:06.940101871 +0000 UTC m=+1453.217361446" Jan 31 07:07:07 crc kubenswrapper[4687]: I0131 07:07:07.280810 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:07:07 crc kubenswrapper[4687]: E0131 07:07:07.281159 4687 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 07:07:07 crc kubenswrapper[4687]: E0131 07:07:07.281187 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-6d699db77c-f72hz: configmap "swift-ring-files" not found Jan 31 07:07:07 crc kubenswrapper[4687]: E0131 07:07:07.281254 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift podName:3b574508-eb1c-4b61-bc77-3878a38f36f3 nodeName:}" failed. No retries permitted until 2026-01-31 07:07:15.281225095 +0000 UTC m=+1461.558484670 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift") pod "swift-proxy-6d699db77c-f72hz" (UID: "3b574508-eb1c-4b61-bc77-3878a38f36f3") : configmap "swift-ring-files" not found Jan 31 07:07:07 crc kubenswrapper[4687]: I0131 07:07:07.489758 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-xzj7m" Jan 31 07:07:07 crc kubenswrapper[4687]: I0131 07:07:07.586610 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc7hh\" (UniqueName: \"kubernetes.io/projected/5c16d3a2-fd52-44b4-80ce-6bf3479ee45c-kube-api-access-rc7hh\") pod \"5c16d3a2-fd52-44b4-80ce-6bf3479ee45c\" (UID: \"5c16d3a2-fd52-44b4-80ce-6bf3479ee45c\") " Jan 31 07:07:07 crc kubenswrapper[4687]: I0131 07:07:07.594663 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c16d3a2-fd52-44b4-80ce-6bf3479ee45c-kube-api-access-rc7hh" (OuterVolumeSpecName: "kube-api-access-rc7hh") pod "5c16d3a2-fd52-44b4-80ce-6bf3479ee45c" (UID: "5c16d3a2-fd52-44b4-80ce-6bf3479ee45c"). InnerVolumeSpecName "kube-api-access-rc7hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:07:07 crc kubenswrapper[4687]: I0131 07:07:07.688497 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc7hh\" (UniqueName: \"kubernetes.io/projected/5c16d3a2-fd52-44b4-80ce-6bf3479ee45c-kube-api-access-rc7hh\") on node \"crc\" DevicePath \"\"" Jan 31 07:07:07 crc kubenswrapper[4687]: I0131 07:07:07.928235 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-index-xzj7m" Jan 31 07:07:07 crc kubenswrapper[4687]: I0131 07:07:07.928312 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-xzj7m" event={"ID":"5c16d3a2-fd52-44b4-80ce-6bf3479ee45c","Type":"ContainerDied","Data":"76cba60103615a1df5061405c947a6b2e1fd017d21411f02f3bd3b092c8433d6"} Jan 31 07:07:07 crc kubenswrapper[4687]: I0131 07:07:07.971068 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/glance-operator-index-xzj7m"] Jan 31 07:07:07 crc kubenswrapper[4687]: I0131 07:07:07.976886 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/glance-operator-index-xzj7m"] Jan 31 07:07:09 crc kubenswrapper[4687]: I0131 07:07:09.613159 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c16d3a2-fd52-44b4-80ce-6bf3479ee45c" path="/var/lib/kubelet/pods/5c16d3a2-fd52-44b4-80ce-6bf3479ee45c/volumes" Jan 31 07:07:12 crc kubenswrapper[4687]: I0131 07:07:12.963798 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-index-h6w75" Jan 31 07:07:12 crc kubenswrapper[4687]: I0131 07:07:12.964191 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/glance-operator-index-h6w75" Jan 31 07:07:12 crc kubenswrapper[4687]: I0131 07:07:12.993026 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/glance-operator-index-h6w75" Jan 31 07:07:14 crc kubenswrapper[4687]: I0131 07:07:14.026125 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-index-h6w75" Jan 31 07:07:15 crc kubenswrapper[4687]: I0131 07:07:15.298062 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:07:15 crc kubenswrapper[4687]: E0131 07:07:15.298267 4687 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 07:07:15 crc kubenswrapper[4687]: E0131 07:07:15.298297 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-6d699db77c-f72hz: configmap "swift-ring-files" not found Jan 31 07:07:15 crc kubenswrapper[4687]: E0131 07:07:15.298368 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift podName:3b574508-eb1c-4b61-bc77-3878a38f36f3 nodeName:}" failed. No retries permitted until 2026-01-31 07:07:31.298347736 +0000 UTC m=+1477.575607311 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift") pod "swift-proxy-6d699db77c-f72hz" (UID: "3b574508-eb1c-4b61-bc77-3878a38f36f3") : configmap "swift-ring-files" not found Jan 31 07:07:18 crc kubenswrapper[4687]: I0131 07:07:18.002680 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" event={"ID":"68acc278-6e5f-44d7-88ec-25ed80bda714","Type":"ContainerStarted","Data":"1ad6b47970d554bb8de23733521e8dc86ef8a4c06cccf8798956f3d26d565031"} Jan 31 07:07:18 crc kubenswrapper[4687]: I0131 07:07:18.018885 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" podStartSLOduration=1.978563426 podStartE2EDuration="32.018865433s" podCreationTimestamp="2026-01-31 07:06:46 +0000 UTC" firstStartedPulling="2026-01-31 07:06:47.499724444 +0000 UTC m=+1433.776984019" lastFinishedPulling="2026-01-31 07:07:17.540026451 +0000 UTC m=+1463.817286026" observedRunningTime="2026-01-31 07:07:18.017852156 +0000 UTC m=+1464.295111751" watchObservedRunningTime="2026-01-31 07:07:18.018865433 +0000 UTC m=+1464.296124998" Jan 31 07:07:18 crc kubenswrapper[4687]: I0131 07:07:18.543274 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:07:18 crc kubenswrapper[4687]: E0131 07:07:18.543488 4687 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 07:07:18 crc kubenswrapper[4687]: E0131 07:07:18.543507 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 
07:07:18 crc kubenswrapper[4687]: E0131 07:07:18.543551 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift podName:4f3169d5-4ca5-47e8-a6a4-b34705f30dd0 nodeName:}" failed. No retries permitted until 2026-01-31 07:07:50.543534839 +0000 UTC m=+1496.820794414 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift") pod "swift-storage-0" (UID: "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0") : configmap "swift-ring-files" not found Jan 31 07:07:21 crc kubenswrapper[4687]: I0131 07:07:21.659489 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp"] Jan 31 07:07:21 crc kubenswrapper[4687]: I0131 07:07:21.660946 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" Jan 31 07:07:21 crc kubenswrapper[4687]: I0131 07:07:21.662752 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9sffv" Jan 31 07:07:21 crc kubenswrapper[4687]: I0131 07:07:21.673486 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp"] Jan 31 07:07:21 crc kubenswrapper[4687]: I0131 07:07:21.826950 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d29e2c7-9c78-4903-938a-8feed8644190-bundle\") pod \"f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp\" (UID: \"1d29e2c7-9c78-4903-938a-8feed8644190\") " pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" Jan 31 07:07:21 crc kubenswrapper[4687]: I0131 07:07:21.826994 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgbv9\" (UniqueName: \"kubernetes.io/projected/1d29e2c7-9c78-4903-938a-8feed8644190-kube-api-access-tgbv9\") pod \"f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp\" (UID: \"1d29e2c7-9c78-4903-938a-8feed8644190\") " pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" Jan 31 07:07:21 crc kubenswrapper[4687]: I0131 07:07:21.827072 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d29e2c7-9c78-4903-938a-8feed8644190-util\") pod \"f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp\" (UID: \"1d29e2c7-9c78-4903-938a-8feed8644190\") " pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" Jan 31 07:07:21 crc kubenswrapper[4687]: I0131 07:07:21.928229 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d29e2c7-9c78-4903-938a-8feed8644190-util\") pod \"f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp\" (UID: \"1d29e2c7-9c78-4903-938a-8feed8644190\") " pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" Jan 31 07:07:21 crc kubenswrapper[4687]: I0131 07:07:21.928365 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d29e2c7-9c78-4903-938a-8feed8644190-bundle\") pod \"f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp\" (UID: \"1d29e2c7-9c78-4903-938a-8feed8644190\") " pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" Jan 31 07:07:21 crc kubenswrapper[4687]: I0131 07:07:21.928429 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgbv9\" (UniqueName: 
\"kubernetes.io/projected/1d29e2c7-9c78-4903-938a-8feed8644190-kube-api-access-tgbv9\") pod \"f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp\" (UID: \"1d29e2c7-9c78-4903-938a-8feed8644190\") " pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" Jan 31 07:07:21 crc kubenswrapper[4687]: I0131 07:07:21.929378 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d29e2c7-9c78-4903-938a-8feed8644190-util\") pod \"f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp\" (UID: \"1d29e2c7-9c78-4903-938a-8feed8644190\") " pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" Jan 31 07:07:21 crc kubenswrapper[4687]: I0131 07:07:21.929391 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d29e2c7-9c78-4903-938a-8feed8644190-bundle\") pod \"f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp\" (UID: \"1d29e2c7-9c78-4903-938a-8feed8644190\") " pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" Jan 31 07:07:21 crc kubenswrapper[4687]: I0131 07:07:21.954803 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgbv9\" (UniqueName: \"kubernetes.io/projected/1d29e2c7-9c78-4903-938a-8feed8644190-kube-api-access-tgbv9\") pod \"f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp\" (UID: \"1d29e2c7-9c78-4903-938a-8feed8644190\") " pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" Jan 31 07:07:21 crc kubenswrapper[4687]: I0131 07:07:21.979948 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" Jan 31 07:07:22 crc kubenswrapper[4687]: I0131 07:07:22.649371 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp"] Jan 31 07:07:22 crc kubenswrapper[4687]: W0131 07:07:22.652923 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d29e2c7_9c78_4903_938a_8feed8644190.slice/crio-a2d57059560addce72c5b5a6c8665b92d278b6395f93872051d3e5b3e9ba6281 WatchSource:0}: Error finding container a2d57059560addce72c5b5a6c8665b92d278b6395f93872051d3e5b3e9ba6281: Status 404 returned error can't find the container with id a2d57059560addce72c5b5a6c8665b92d278b6395f93872051d3e5b3e9ba6281 Jan 31 07:07:23 crc kubenswrapper[4687]: I0131 07:07:23.039235 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" event={"ID":"1d29e2c7-9c78-4903-938a-8feed8644190","Type":"ContainerStarted","Data":"86633f3ea008c8a5db815b52a02c61285b1779f25c9c1cca6ebd20c265f01ff9"} Jan 31 07:07:23 crc kubenswrapper[4687]: I0131 07:07:23.039285 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" event={"ID":"1d29e2c7-9c78-4903-938a-8feed8644190","Type":"ContainerStarted","Data":"a2d57059560addce72c5b5a6c8665b92d278b6395f93872051d3e5b3e9ba6281"} Jan 31 07:07:24 crc kubenswrapper[4687]: I0131 07:07:24.047126 4687 generic.go:334] "Generic (PLEG): container finished" podID="1d29e2c7-9c78-4903-938a-8feed8644190" containerID="86633f3ea008c8a5db815b52a02c61285b1779f25c9c1cca6ebd20c265f01ff9" exitCode=0 Jan 31 07:07:24 crc kubenswrapper[4687]: I0131 07:07:24.048462 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" event={"ID":"1d29e2c7-9c78-4903-938a-8feed8644190","Type":"ContainerDied","Data":"86633f3ea008c8a5db815b52a02c61285b1779f25c9c1cca6ebd20c265f01ff9"} Jan 31 07:07:25 crc kubenswrapper[4687]: I0131 07:07:25.063712 4687 generic.go:334] "Generic (PLEG): container finished" podID="1d29e2c7-9c78-4903-938a-8feed8644190" containerID="884427cf10f24ae8fde8b7a03cb7c0e32b59f6c75ebf880e7417330619486825" exitCode=0 Jan 31 07:07:25 crc kubenswrapper[4687]: I0131 07:07:25.063842 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" event={"ID":"1d29e2c7-9c78-4903-938a-8feed8644190","Type":"ContainerDied","Data":"884427cf10f24ae8fde8b7a03cb7c0e32b59f6c75ebf880e7417330619486825"} Jan 31 07:07:25 crc kubenswrapper[4687]: I0131 07:07:25.070781 4687 generic.go:334] "Generic (PLEG): container finished" podID="68acc278-6e5f-44d7-88ec-25ed80bda714" containerID="1ad6b47970d554bb8de23733521e8dc86ef8a4c06cccf8798956f3d26d565031" exitCode=0 Jan 31 07:07:25 crc kubenswrapper[4687]: I0131 07:07:25.070828 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" event={"ID":"68acc278-6e5f-44d7-88ec-25ed80bda714","Type":"ContainerDied","Data":"1ad6b47970d554bb8de23733521e8dc86ef8a4c06cccf8798956f3d26d565031"} Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.078794 4687 generic.go:334] "Generic (PLEG): container finished" podID="1d29e2c7-9c78-4903-938a-8feed8644190" containerID="4db508992b773dc1480fd79bf37f830fce67e47dfba2db6ff3e9ffc433880836" exitCode=0 Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.078831 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" 
event={"ID":"1d29e2c7-9c78-4903-938a-8feed8644190","Type":"ContainerDied","Data":"4db508992b773dc1480fd79bf37f830fce67e47dfba2db6ff3e9ffc433880836"} Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.366839 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.426014 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/68acc278-6e5f-44d7-88ec-25ed80bda714-dispersionconf\") pod \"68acc278-6e5f-44d7-88ec-25ed80bda714\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.426050 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68acc278-6e5f-44d7-88ec-25ed80bda714-scripts\") pod \"68acc278-6e5f-44d7-88ec-25ed80bda714\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.426123 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/68acc278-6e5f-44d7-88ec-25ed80bda714-swiftconf\") pod \"68acc278-6e5f-44d7-88ec-25ed80bda714\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.426143 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/68acc278-6e5f-44d7-88ec-25ed80bda714-etc-swift\") pod \"68acc278-6e5f-44d7-88ec-25ed80bda714\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.426185 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgrqn\" (UniqueName: \"kubernetes.io/projected/68acc278-6e5f-44d7-88ec-25ed80bda714-kube-api-access-hgrqn\") pod 
\"68acc278-6e5f-44d7-88ec-25ed80bda714\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.426219 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/68acc278-6e5f-44d7-88ec-25ed80bda714-ring-data-devices\") pod \"68acc278-6e5f-44d7-88ec-25ed80bda714\" (UID: \"68acc278-6e5f-44d7-88ec-25ed80bda714\") " Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.427296 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68acc278-6e5f-44d7-88ec-25ed80bda714-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "68acc278-6e5f-44d7-88ec-25ed80bda714" (UID: "68acc278-6e5f-44d7-88ec-25ed80bda714"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.430600 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68acc278-6e5f-44d7-88ec-25ed80bda714-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "68acc278-6e5f-44d7-88ec-25ed80bda714" (UID: "68acc278-6e5f-44d7-88ec-25ed80bda714"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.435200 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68acc278-6e5f-44d7-88ec-25ed80bda714-kube-api-access-hgrqn" (OuterVolumeSpecName: "kube-api-access-hgrqn") pod "68acc278-6e5f-44d7-88ec-25ed80bda714" (UID: "68acc278-6e5f-44d7-88ec-25ed80bda714"). InnerVolumeSpecName "kube-api-access-hgrqn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.449723 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68acc278-6e5f-44d7-88ec-25ed80bda714-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "68acc278-6e5f-44d7-88ec-25ed80bda714" (UID: "68acc278-6e5f-44d7-88ec-25ed80bda714"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.450027 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68acc278-6e5f-44d7-88ec-25ed80bda714-scripts" (OuterVolumeSpecName: "scripts") pod "68acc278-6e5f-44d7-88ec-25ed80bda714" (UID: "68acc278-6e5f-44d7-88ec-25ed80bda714"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.455845 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68acc278-6e5f-44d7-88ec-25ed80bda714-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "68acc278-6e5f-44d7-88ec-25ed80bda714" (UID: "68acc278-6e5f-44d7-88ec-25ed80bda714"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.528193 4687 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/68acc278-6e5f-44d7-88ec-25ed80bda714-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.528497 4687 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/68acc278-6e5f-44d7-88ec-25ed80bda714-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.528507 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgrqn\" (UniqueName: \"kubernetes.io/projected/68acc278-6e5f-44d7-88ec-25ed80bda714-kube-api-access-hgrqn\") on node \"crc\" DevicePath \"\"" Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.528516 4687 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/68acc278-6e5f-44d7-88ec-25ed80bda714-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.528525 4687 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/68acc278-6e5f-44d7-88ec-25ed80bda714-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 07:07:26 crc kubenswrapper[4687]: I0131 07:07:26.528533 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68acc278-6e5f-44d7-88ec-25ed80bda714-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:07:27 crc kubenswrapper[4687]: I0131 07:07:27.088356 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" Jan 31 07:07:27 crc kubenswrapper[4687]: I0131 07:07:27.088372 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-cdxqh" event={"ID":"68acc278-6e5f-44d7-88ec-25ed80bda714","Type":"ContainerDied","Data":"5f40a43e1b60331d122e35e00c092506f914058877311a0358ea945116a95524"} Jan 31 07:07:27 crc kubenswrapper[4687]: I0131 07:07:27.088462 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f40a43e1b60331d122e35e00c092506f914058877311a0358ea945116a95524" Jan 31 07:07:27 crc kubenswrapper[4687]: I0131 07:07:27.340638 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" Jan 31 07:07:27 crc kubenswrapper[4687]: I0131 07:07:27.440477 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgbv9\" (UniqueName: \"kubernetes.io/projected/1d29e2c7-9c78-4903-938a-8feed8644190-kube-api-access-tgbv9\") pod \"1d29e2c7-9c78-4903-938a-8feed8644190\" (UID: \"1d29e2c7-9c78-4903-938a-8feed8644190\") " Jan 31 07:07:27 crc kubenswrapper[4687]: I0131 07:07:27.440586 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d29e2c7-9c78-4903-938a-8feed8644190-bundle\") pod \"1d29e2c7-9c78-4903-938a-8feed8644190\" (UID: \"1d29e2c7-9c78-4903-938a-8feed8644190\") " Jan 31 07:07:27 crc kubenswrapper[4687]: I0131 07:07:27.440717 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d29e2c7-9c78-4903-938a-8feed8644190-util\") pod \"1d29e2c7-9c78-4903-938a-8feed8644190\" (UID: \"1d29e2c7-9c78-4903-938a-8feed8644190\") " Jan 31 07:07:27 crc kubenswrapper[4687]: I0131 07:07:27.441517 4687 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d29e2c7-9c78-4903-938a-8feed8644190-bundle" (OuterVolumeSpecName: "bundle") pod "1d29e2c7-9c78-4903-938a-8feed8644190" (UID: "1d29e2c7-9c78-4903-938a-8feed8644190"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:07:27 crc kubenswrapper[4687]: I0131 07:07:27.445218 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d29e2c7-9c78-4903-938a-8feed8644190-kube-api-access-tgbv9" (OuterVolumeSpecName: "kube-api-access-tgbv9") pod "1d29e2c7-9c78-4903-938a-8feed8644190" (UID: "1d29e2c7-9c78-4903-938a-8feed8644190"). InnerVolumeSpecName "kube-api-access-tgbv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:07:27 crc kubenswrapper[4687]: I0131 07:07:27.453556 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d29e2c7-9c78-4903-938a-8feed8644190-util" (OuterVolumeSpecName: "util") pod "1d29e2c7-9c78-4903-938a-8feed8644190" (UID: "1d29e2c7-9c78-4903-938a-8feed8644190"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:07:27 crc kubenswrapper[4687]: I0131 07:07:27.554153 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d29e2c7-9c78-4903-938a-8feed8644190-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:07:27 crc kubenswrapper[4687]: I0131 07:07:27.554509 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d29e2c7-9c78-4903-938a-8feed8644190-util\") on node \"crc\" DevicePath \"\"" Jan 31 07:07:27 crc kubenswrapper[4687]: I0131 07:07:27.554532 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgbv9\" (UniqueName: \"kubernetes.io/projected/1d29e2c7-9c78-4903-938a-8feed8644190-kube-api-access-tgbv9\") on node \"crc\" DevicePath \"\"" Jan 31 07:07:28 crc kubenswrapper[4687]: I0131 07:07:28.097559 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" event={"ID":"1d29e2c7-9c78-4903-938a-8feed8644190","Type":"ContainerDied","Data":"a2d57059560addce72c5b5a6c8665b92d278b6395f93872051d3e5b3e9ba6281"} Jan 31 07:07:28 crc kubenswrapper[4687]: I0131 07:07:28.097603 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp" Jan 31 07:07:28 crc kubenswrapper[4687]: I0131 07:07:28.097609 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2d57059560addce72c5b5a6c8665b92d278b6395f93872051d3e5b3e9ba6281" Jan 31 07:07:28 crc kubenswrapper[4687]: I0131 07:07:28.684586 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:07:28 crc kubenswrapper[4687]: I0131 07:07:28.684918 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:07:31 crc kubenswrapper[4687]: I0131 07:07:31.307881 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:07:31 crc kubenswrapper[4687]: I0131 07:07:31.319881 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift\") pod \"swift-proxy-6d699db77c-f72hz\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:07:31 crc kubenswrapper[4687]: I0131 07:07:31.541391 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:07:31 crc kubenswrapper[4687]: I0131 07:07:31.947730 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-proxy-6d699db77c-f72hz"] Jan 31 07:07:32 crc kubenswrapper[4687]: I0131 07:07:32.125018 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" event={"ID":"3b574508-eb1c-4b61-bc77-3878a38f36f3","Type":"ContainerStarted","Data":"760050e5bc449d6233b42570fba8a91e31fe01bb287307a4c536a3eebc531b0f"} Jan 31 07:07:32 crc kubenswrapper[4687]: I0131 07:07:32.125293 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" event={"ID":"3b574508-eb1c-4b61-bc77-3878a38f36f3","Type":"ContainerStarted","Data":"7e13435e423dd8ab2fb232fc66d1b74519ffa22cdb10a3857de92b9910fd1794"} Jan 31 07:07:33 crc kubenswrapper[4687]: I0131 07:07:33.134396 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" event={"ID":"3b574508-eb1c-4b61-bc77-3878a38f36f3","Type":"ContainerStarted","Data":"55865aee0d7ad1e7ca4fefbcebbfc24f0bf9203cbd8dfeb846d0394ee774abd6"} Jan 31 07:07:33 crc kubenswrapper[4687]: I0131 07:07:33.135754 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:07:33 crc kubenswrapper[4687]: I0131 07:07:33.135797 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:07:33 crc kubenswrapper[4687]: I0131 07:07:33.155662 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" podStartSLOduration=34.155640979 podStartE2EDuration="34.155640979s" podCreationTimestamp="2026-01-31 07:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:07:33.151379112 +0000 UTC m=+1479.428638697" watchObservedRunningTime="2026-01-31 07:07:33.155640979 +0000 UTC m=+1479.432900554" Jan 31 07:07:41 crc kubenswrapper[4687]: I0131 07:07:41.547979 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:07:41 crc kubenswrapper[4687]: I0131 07:07:41.549525 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.546682 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz"] Jan 31 07:07:46 crc kubenswrapper[4687]: E0131 07:07:46.547394 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d29e2c7-9c78-4903-938a-8feed8644190" containerName="util" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.547421 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d29e2c7-9c78-4903-938a-8feed8644190" containerName="util" Jan 31 07:07:46 crc kubenswrapper[4687]: E0131 07:07:46.547434 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d29e2c7-9c78-4903-938a-8feed8644190" containerName="extract" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.547439 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d29e2c7-9c78-4903-938a-8feed8644190" containerName="extract" Jan 31 07:07:46 crc kubenswrapper[4687]: E0131 07:07:46.547453 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68acc278-6e5f-44d7-88ec-25ed80bda714" containerName="swift-ring-rebalance" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.547461 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="68acc278-6e5f-44d7-88ec-25ed80bda714" containerName="swift-ring-rebalance" Jan 31 07:07:46 crc kubenswrapper[4687]: E0131 07:07:46.547472 4687 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d29e2c7-9c78-4903-938a-8feed8644190" containerName="pull" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.547477 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d29e2c7-9c78-4903-938a-8feed8644190" containerName="pull" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.547613 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="68acc278-6e5f-44d7-88ec-25ed80bda714" containerName="swift-ring-rebalance" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.547621 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d29e2c7-9c78-4903-938a-8feed8644190" containerName="extract" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.548071 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.551901 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-8cnpd" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.553753 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-service-cert" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.559708 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz"] Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.672724 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6787f12-c3f6-4611-b5b0-1b26155d4d41-webhook-cert\") pod \"glance-operator-controller-manager-66ccc6f9f9-68gsz\" (UID: \"f6787f12-c3f6-4611-b5b0-1b26155d4d41\") " pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" Jan 31 
07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.673616 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpc7l\" (UniqueName: \"kubernetes.io/projected/f6787f12-c3f6-4611-b5b0-1b26155d4d41-kube-api-access-gpc7l\") pod \"glance-operator-controller-manager-66ccc6f9f9-68gsz\" (UID: \"f6787f12-c3f6-4611-b5b0-1b26155d4d41\") " pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.673683 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f6787f12-c3f6-4611-b5b0-1b26155d4d41-apiservice-cert\") pod \"glance-operator-controller-manager-66ccc6f9f9-68gsz\" (UID: \"f6787f12-c3f6-4611-b5b0-1b26155d4d41\") " pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.775776 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6787f12-c3f6-4611-b5b0-1b26155d4d41-webhook-cert\") pod \"glance-operator-controller-manager-66ccc6f9f9-68gsz\" (UID: \"f6787f12-c3f6-4611-b5b0-1b26155d4d41\") " pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.775873 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpc7l\" (UniqueName: \"kubernetes.io/projected/f6787f12-c3f6-4611-b5b0-1b26155d4d41-kube-api-access-gpc7l\") pod \"glance-operator-controller-manager-66ccc6f9f9-68gsz\" (UID: \"f6787f12-c3f6-4611-b5b0-1b26155d4d41\") " pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.775914 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/f6787f12-c3f6-4611-b5b0-1b26155d4d41-apiservice-cert\") pod \"glance-operator-controller-manager-66ccc6f9f9-68gsz\" (UID: \"f6787f12-c3f6-4611-b5b0-1b26155d4d41\") " pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.781595 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f6787f12-c3f6-4611-b5b0-1b26155d4d41-apiservice-cert\") pod \"glance-operator-controller-manager-66ccc6f9f9-68gsz\" (UID: \"f6787f12-c3f6-4611-b5b0-1b26155d4d41\") " pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.783378 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6787f12-c3f6-4611-b5b0-1b26155d4d41-webhook-cert\") pod \"glance-operator-controller-manager-66ccc6f9f9-68gsz\" (UID: \"f6787f12-c3f6-4611-b5b0-1b26155d4d41\") " pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.793723 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpc7l\" (UniqueName: \"kubernetes.io/projected/f6787f12-c3f6-4611-b5b0-1b26155d4d41-kube-api-access-gpc7l\") pod \"glance-operator-controller-manager-66ccc6f9f9-68gsz\" (UID: \"f6787f12-c3f6-4611-b5b0-1b26155d4d41\") " pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" Jan 31 07:07:46 crc kubenswrapper[4687]: I0131 07:07:46.901466 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" Jan 31 07:07:47 crc kubenswrapper[4687]: I0131 07:07:47.355888 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz"] Jan 31 07:07:47 crc kubenswrapper[4687]: W0131 07:07:47.359160 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6787f12_c3f6_4611_b5b0_1b26155d4d41.slice/crio-12f814a7f9d4dc47da3e7c033411ad7bfc305469ded99ea668c3de867e2237ff WatchSource:0}: Error finding container 12f814a7f9d4dc47da3e7c033411ad7bfc305469ded99ea668c3de867e2237ff: Status 404 returned error can't find the container with id 12f814a7f9d4dc47da3e7c033411ad7bfc305469ded99ea668c3de867e2237ff Jan 31 07:07:47 crc kubenswrapper[4687]: I0131 07:07:47.955377 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" event={"ID":"f6787f12-c3f6-4611-b5b0-1b26155d4d41","Type":"ContainerStarted","Data":"12f814a7f9d4dc47da3e7c033411ad7bfc305469ded99ea668c3de867e2237ff"} Jan 31 07:07:48 crc kubenswrapper[4687]: I0131 07:07:48.964587 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" event={"ID":"f6787f12-c3f6-4611-b5b0-1b26155d4d41","Type":"ContainerStarted","Data":"33fe9e25a9ae88caf37b956f1812aa55a4cbd370a72ce83971048395f4299982"} Jan 31 07:07:48 crc kubenswrapper[4687]: I0131 07:07:48.964984 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" Jan 31 07:07:48 crc kubenswrapper[4687]: I0131 07:07:48.985586 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" podStartSLOduration=1.628486828 
podStartE2EDuration="2.9855673s" podCreationTimestamp="2026-01-31 07:07:46 +0000 UTC" firstStartedPulling="2026-01-31 07:07:47.36139494 +0000 UTC m=+1493.638654515" lastFinishedPulling="2026-01-31 07:07:48.718475412 +0000 UTC m=+1494.995734987" observedRunningTime="2026-01-31 07:07:48.980021618 +0000 UTC m=+1495.257281193" watchObservedRunningTime="2026-01-31 07:07:48.9855673 +0000 UTC m=+1495.262826875" Jan 31 07:07:50 crc kubenswrapper[4687]: I0131 07:07:50.556711 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:07:50 crc kubenswrapper[4687]: I0131 07:07:50.564101 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift\") pod \"swift-storage-0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:07:50 crc kubenswrapper[4687]: I0131 07:07:50.666050 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:07:51 crc kubenswrapper[4687]: W0131 07:07:51.456360 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f3169d5_4ca5_47e8_a6a4_b34705f30dd0.slice/crio-6b55cba56e12adbea9787c4e6c7f8b2a1f18b60750f0b59439ea298fede50957 WatchSource:0}: Error finding container 6b55cba56e12adbea9787c4e6c7f8b2a1f18b60750f0b59439ea298fede50957: Status 404 returned error can't find the container with id 6b55cba56e12adbea9787c4e6c7f8b2a1f18b60750f0b59439ea298fede50957 Jan 31 07:07:51 crc kubenswrapper[4687]: I0131 07:07:51.459004 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Jan 31 07:07:51 crc kubenswrapper[4687]: I0131 07:07:51.986104 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerStarted","Data":"6b55cba56e12adbea9787c4e6c7f8b2a1f18b60750f0b59439ea298fede50957"} Jan 31 07:07:53 crc kubenswrapper[4687]: I0131 07:07:53.343516 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerStarted","Data":"502b54aa63f153278d1af53d6e2ef57ee86668bc1ca4b9331e43f7e1d8fcdd51"} Jan 31 07:07:53 crc kubenswrapper[4687]: I0131 07:07:53.343740 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerStarted","Data":"57255eff28aadc0f504b048b696e5785a65bddda1c04167b42793b0ae630f5f8"} Jan 31 07:07:54 crc kubenswrapper[4687]: I0131 07:07:54.420470 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" 
event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerStarted","Data":"dc059a4299aaa5e0039676b11749b1ff11d523783abb720b1db4fca1b57d8a02"} Jan 31 07:07:54 crc kubenswrapper[4687]: I0131 07:07:54.420785 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerStarted","Data":"30c8a9046e479dd3d4719b5b38bd785ecc1a69005467729281cf8324e096a6d8"} Jan 31 07:07:56 crc kubenswrapper[4687]: I0131 07:07:56.683273 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerStarted","Data":"3ab4ab844783fa31daf1c1eed13d6cad654b268a5cebed800beb83b2b4076a10"} Jan 31 07:07:56 crc kubenswrapper[4687]: I0131 07:07:56.684167 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerStarted","Data":"1de988ae783d7ef322b32e03cec233e8d6a73b90c66b17400298df3da2c6bba3"} Jan 31 07:07:56 crc kubenswrapper[4687]: I0131 07:07:56.684213 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerStarted","Data":"250db73b99466a6d136c29b5ddb443fea1455c9b3f051000bc5c30d2a3dcac0d"} Jan 31 07:07:56 crc kubenswrapper[4687]: I0131 07:07:56.939613 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" Jan 31 07:07:57 crc kubenswrapper[4687]: I0131 07:07:57.695632 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerStarted","Data":"29971351b38387c34c20fe50e6de67979f4bc9723a1be93feef1492db50a6d31"} Jan 31 07:07:58 crc kubenswrapper[4687]: I0131 07:07:58.686501 4687 
patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:07:58 crc kubenswrapper[4687]: I0131 07:07:58.686584 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:07:59 crc kubenswrapper[4687]: I0131 07:07:59.908691 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerStarted","Data":"462d03384382a6f3fb4523829751723bfeacf1bcf107bf6627d59de69d3cc69c"} Jan 31 07:07:59 crc kubenswrapper[4687]: I0131 07:07:59.909015 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerStarted","Data":"067116e8aa6dadfeb22d2c041ee5c818ebc935d4f59ceeefd77867071352b8cb"} Jan 31 07:07:59 crc kubenswrapper[4687]: I0131 07:07:59.909034 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerStarted","Data":"829eb8a3a323c6c98f85abad5a6e6c8ae17563e61b17350c95f76c0df7a70f82"} Jan 31 07:07:59 crc kubenswrapper[4687]: I0131 07:07:59.909050 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerStarted","Data":"07418b09ea9b43e2f4b1393bd07f96ae9987062bed63bf2dcc8bd66e1db90bc0"} Jan 31 07:08:00 crc kubenswrapper[4687]: I0131 07:08:00.927863 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerStarted","Data":"3769f301e625ab3cce3a06cc29e9d5f5bb2ae84bd6b08ca2cb7bb3f7aabb6511"} Jan 31 07:08:00 crc kubenswrapper[4687]: I0131 07:08:00.927952 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerStarted","Data":"087709f07a16a8956cad97cec775636bfa983adaa6627cebd8289db5e77fc582"} Jan 31 07:08:01 crc kubenswrapper[4687]: I0131 07:08:01.941721 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerStarted","Data":"87120a710046f2e75116a16c4179bf49847f21569c6c405cde1ad7b2f9011407"} Jan 31 07:08:01 crc kubenswrapper[4687]: I0131 07:08:01.986610 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-storage-0" podStartSLOduration=69.585859353 podStartE2EDuration="1m16.986581139s" podCreationTimestamp="2026-01-31 07:06:45 +0000 UTC" firstStartedPulling="2026-01-31 07:07:51.457922468 +0000 UTC m=+1497.735182043" lastFinishedPulling="2026-01-31 07:07:58.858644254 +0000 UTC m=+1505.135903829" observedRunningTime="2026-01-31 07:08:01.978456736 +0000 UTC m=+1508.255716321" watchObservedRunningTime="2026-01-31 07:08:01.986581139 +0000 UTC m=+1508.263840754" Jan 31 07:08:02 crc kubenswrapper[4687]: I0131 07:08:02.784286 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-b2ld2"] Jan 31 07:08:02 crc kubenswrapper[4687]: I0131 07:08:02.785587 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-b2ld2" Jan 31 07:08:02 crc kubenswrapper[4687]: I0131 07:08:02.796040 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-b2ld2"] Jan 31 07:08:02 crc kubenswrapper[4687]: I0131 07:08:02.881357 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-8363-account-create-update-9htwj"] Jan 31 07:08:02 crc kubenswrapper[4687]: I0131 07:08:02.882139 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-8363-account-create-update-9htwj" Jan 31 07:08:02 crc kubenswrapper[4687]: I0131 07:08:02.885710 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Jan 31 07:08:02 crc kubenswrapper[4687]: I0131 07:08:02.897666 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-8363-account-create-update-9htwj"] Jan 31 07:08:02 crc kubenswrapper[4687]: I0131 07:08:02.921155 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 07:08:02 crc kubenswrapper[4687]: I0131 07:08:02.922369 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 07:08:02 crc kubenswrapper[4687]: I0131 07:08:02.925697 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"openstack-config-secret" Jan 31 07:08:02 crc kubenswrapper[4687]: I0131 07:08:02.927264 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"default-dockercfg-vwlv6" Jan 31 07:08:02 crc kubenswrapper[4687]: I0131 07:08:02.927608 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config" Jan 31 07:08:02 crc kubenswrapper[4687]: I0131 07:08:02.927762 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts-9db6gc427h" Jan 31 07:08:02 crc kubenswrapper[4687]: I0131 07:08:02.937946 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 07:08:02 crc kubenswrapper[4687]: I0131 07:08:02.939919 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5072d38f-e7e3-4f83-a1d0-9220fabfd685-operator-scripts\") pod \"glance-db-create-b2ld2\" (UID: \"5072d38f-e7e3-4f83-a1d0-9220fabfd685\") " pod="glance-kuttl-tests/glance-db-create-b2ld2" Jan 31 07:08:02 crc kubenswrapper[4687]: I0131 07:08:02.939956 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms92v\" (UniqueName: \"kubernetes.io/projected/5072d38f-e7e3-4f83-a1d0-9220fabfd685-kube-api-access-ms92v\") pod \"glance-db-create-b2ld2\" (UID: \"5072d38f-e7e3-4f83-a1d0-9220fabfd685\") " pod="glance-kuttl-tests/glance-db-create-b2ld2" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.040866 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5072d38f-e7e3-4f83-a1d0-9220fabfd685-operator-scripts\") pod \"glance-db-create-b2ld2\" (UID: \"5072d38f-e7e3-4f83-a1d0-9220fabfd685\") " pod="glance-kuttl-tests/glance-db-create-b2ld2" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.040907 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms92v\" (UniqueName: \"kubernetes.io/projected/5072d38f-e7e3-4f83-a1d0-9220fabfd685-kube-api-access-ms92v\") pod \"glance-db-create-b2ld2\" (UID: \"5072d38f-e7e3-4f83-a1d0-9220fabfd685\") " pod="glance-kuttl-tests/glance-db-create-b2ld2" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.040930 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-scripts\") pod \"openstackclient\" (UID: \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.040969 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-config\") pod \"openstackclient\" (UID: \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.040986 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjbvb\" (UniqueName: \"kubernetes.io/projected/16bf7b67-a057-4dcc-8c5d-8879e73a2932-kube-api-access-mjbvb\") pod \"openstackclient\" (UID: \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.041056 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/04701a36-4402-409c-86fb-4d4240226b7b-operator-scripts\") pod \"glance-8363-account-create-update-9htwj\" (UID: \"04701a36-4402-409c-86fb-4d4240226b7b\") " pod="glance-kuttl-tests/glance-8363-account-create-update-9htwj" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.041084 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-config-secret\") pod \"openstackclient\" (UID: \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.041110 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4flqw\" (UniqueName: \"kubernetes.io/projected/04701a36-4402-409c-86fb-4d4240226b7b-kube-api-access-4flqw\") pod \"glance-8363-account-create-update-9htwj\" (UID: \"04701a36-4402-409c-86fb-4d4240226b7b\") " pod="glance-kuttl-tests/glance-8363-account-create-update-9htwj" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.042019 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5072d38f-e7e3-4f83-a1d0-9220fabfd685-operator-scripts\") pod \"glance-db-create-b2ld2\" (UID: \"5072d38f-e7e3-4f83-a1d0-9220fabfd685\") " pod="glance-kuttl-tests/glance-db-create-b2ld2" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.059615 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms92v\" (UniqueName: \"kubernetes.io/projected/5072d38f-e7e3-4f83-a1d0-9220fabfd685-kube-api-access-ms92v\") pod \"glance-db-create-b2ld2\" (UID: \"5072d38f-e7e3-4f83-a1d0-9220fabfd685\") " pod="glance-kuttl-tests/glance-db-create-b2ld2" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.103011 4687 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-b2ld2" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.142591 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04701a36-4402-409c-86fb-4d4240226b7b-operator-scripts\") pod \"glance-8363-account-create-update-9htwj\" (UID: \"04701a36-4402-409c-86fb-4d4240226b7b\") " pod="glance-kuttl-tests/glance-8363-account-create-update-9htwj" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.142667 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-config-secret\") pod \"openstackclient\" (UID: \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.142715 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4flqw\" (UniqueName: \"kubernetes.io/projected/04701a36-4402-409c-86fb-4d4240226b7b-kube-api-access-4flqw\") pod \"glance-8363-account-create-update-9htwj\" (UID: \"04701a36-4402-409c-86fb-4d4240226b7b\") " pod="glance-kuttl-tests/glance-8363-account-create-update-9htwj" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.142754 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-scripts\") pod \"openstackclient\" (UID: \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.142790 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-config\") pod \"openstackclient\" (UID: 
\"16bf7b67-a057-4dcc-8c5d-8879e73a2932\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.142815 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjbvb\" (UniqueName: \"kubernetes.io/projected/16bf7b67-a057-4dcc-8c5d-8879e73a2932-kube-api-access-mjbvb\") pod \"openstackclient\" (UID: \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.144694 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-scripts\") pod \"openstackclient\" (UID: \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.144847 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-config\") pod \"openstackclient\" (UID: \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.144906 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04701a36-4402-409c-86fb-4d4240226b7b-operator-scripts\") pod \"glance-8363-account-create-update-9htwj\" (UID: \"04701a36-4402-409c-86fb-4d4240226b7b\") " pod="glance-kuttl-tests/glance-8363-account-create-update-9htwj" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.148033 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-config-secret\") pod \"openstackclient\" (UID: \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\") " 
pod="glance-kuttl-tests/openstackclient" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.161178 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4flqw\" (UniqueName: \"kubernetes.io/projected/04701a36-4402-409c-86fb-4d4240226b7b-kube-api-access-4flqw\") pod \"glance-8363-account-create-update-9htwj\" (UID: \"04701a36-4402-409c-86fb-4d4240226b7b\") " pod="glance-kuttl-tests/glance-8363-account-create-update-9htwj" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.165850 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjbvb\" (UniqueName: \"kubernetes.io/projected/16bf7b67-a057-4dcc-8c5d-8879e73a2932-kube-api-access-mjbvb\") pod \"openstackclient\" (UID: \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.213670 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-8363-account-create-update-9htwj" Jan 31 07:08:03 crc kubenswrapper[4687]: I0131 07:08:03.245786 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 07:08:04 crc kubenswrapper[4687]: I0131 07:08:04.592669 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-b2ld2"] Jan 31 07:08:04 crc kubenswrapper[4687]: I0131 07:08:04.951591 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-8363-account-create-update-9htwj"] Jan 31 07:08:05 crc kubenswrapper[4687]: I0131 07:08:05.078626 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 07:08:05 crc kubenswrapper[4687]: W0131 07:08:05.092778 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16bf7b67_a057_4dcc_8c5d_8879e73a2932.slice/crio-2d17c8c7766c012b3605a8538cb01d2f0363d038ed3d84c9d60826de1e23a80f WatchSource:0}: Error finding container 2d17c8c7766c012b3605a8538cb01d2f0363d038ed3d84c9d60826de1e23a80f: Status 404 returned error can't find the container with id 2d17c8c7766c012b3605a8538cb01d2f0363d038ed3d84c9d60826de1e23a80f Jan 31 07:08:05 crc kubenswrapper[4687]: I0131 07:08:05.494023 4687 generic.go:334] "Generic (PLEG): container finished" podID="5072d38f-e7e3-4f83-a1d0-9220fabfd685" containerID="187adf814d4cf77a90d93aee991fe42fee11395e53bf29c5b943d6964fffd080" exitCode=0 Jan 31 07:08:05 crc kubenswrapper[4687]: I0131 07:08:05.494089 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-b2ld2" event={"ID":"5072d38f-e7e3-4f83-a1d0-9220fabfd685","Type":"ContainerDied","Data":"187adf814d4cf77a90d93aee991fe42fee11395e53bf29c5b943d6964fffd080"} Jan 31 07:08:05 crc kubenswrapper[4687]: I0131 07:08:05.494442 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-b2ld2" 
event={"ID":"5072d38f-e7e3-4f83-a1d0-9220fabfd685","Type":"ContainerStarted","Data":"4e92e40b7583948fa0c6e9847b3896eba1f453ef728ce73ba0d27899076c216f"} Jan 31 07:08:05 crc kubenswrapper[4687]: I0131 07:08:05.495856 4687 generic.go:334] "Generic (PLEG): container finished" podID="04701a36-4402-409c-86fb-4d4240226b7b" containerID="3aeffc916158a4595c408f8e8d60856618b65d4f07e521145895c853299dc813" exitCode=0 Jan 31 07:08:05 crc kubenswrapper[4687]: I0131 07:08:05.495890 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-8363-account-create-update-9htwj" event={"ID":"04701a36-4402-409c-86fb-4d4240226b7b","Type":"ContainerDied","Data":"3aeffc916158a4595c408f8e8d60856618b65d4f07e521145895c853299dc813"} Jan 31 07:08:05 crc kubenswrapper[4687]: I0131 07:08:05.495915 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-8363-account-create-update-9htwj" event={"ID":"04701a36-4402-409c-86fb-4d4240226b7b","Type":"ContainerStarted","Data":"a5afb2ed98da902dd737462c1ac567810eb69265a7675623689d9da30d23bae8"} Jan 31 07:08:05 crc kubenswrapper[4687]: I0131 07:08:05.497866 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"16bf7b67-a057-4dcc-8c5d-8879e73a2932","Type":"ContainerStarted","Data":"2d17c8c7766c012b3605a8538cb01d2f0363d038ed3d84c9d60826de1e23a80f"} Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.036140 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-b2ld2" Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.040827 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-8363-account-create-update-9htwj" Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.231253 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5072d38f-e7e3-4f83-a1d0-9220fabfd685-operator-scripts\") pod \"5072d38f-e7e3-4f83-a1d0-9220fabfd685\" (UID: \"5072d38f-e7e3-4f83-a1d0-9220fabfd685\") " Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.231386 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4flqw\" (UniqueName: \"kubernetes.io/projected/04701a36-4402-409c-86fb-4d4240226b7b-kube-api-access-4flqw\") pod \"04701a36-4402-409c-86fb-4d4240226b7b\" (UID: \"04701a36-4402-409c-86fb-4d4240226b7b\") " Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.231475 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04701a36-4402-409c-86fb-4d4240226b7b-operator-scripts\") pod \"04701a36-4402-409c-86fb-4d4240226b7b\" (UID: \"04701a36-4402-409c-86fb-4d4240226b7b\") " Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.231535 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms92v\" (UniqueName: \"kubernetes.io/projected/5072d38f-e7e3-4f83-a1d0-9220fabfd685-kube-api-access-ms92v\") pod \"5072d38f-e7e3-4f83-a1d0-9220fabfd685\" (UID: \"5072d38f-e7e3-4f83-a1d0-9220fabfd685\") " Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.232066 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5072d38f-e7e3-4f83-a1d0-9220fabfd685-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5072d38f-e7e3-4f83-a1d0-9220fabfd685" (UID: "5072d38f-e7e3-4f83-a1d0-9220fabfd685"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.233299 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04701a36-4402-409c-86fb-4d4240226b7b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "04701a36-4402-409c-86fb-4d4240226b7b" (UID: "04701a36-4402-409c-86fb-4d4240226b7b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.250138 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5072d38f-e7e3-4f83-a1d0-9220fabfd685-kube-api-access-ms92v" (OuterVolumeSpecName: "kube-api-access-ms92v") pod "5072d38f-e7e3-4f83-a1d0-9220fabfd685" (UID: "5072d38f-e7e3-4f83-a1d0-9220fabfd685"). InnerVolumeSpecName "kube-api-access-ms92v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.250270 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04701a36-4402-409c-86fb-4d4240226b7b-kube-api-access-4flqw" (OuterVolumeSpecName: "kube-api-access-4flqw") pod "04701a36-4402-409c-86fb-4d4240226b7b" (UID: "04701a36-4402-409c-86fb-4d4240226b7b"). InnerVolumeSpecName "kube-api-access-4flqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.333558 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04701a36-4402-409c-86fb-4d4240226b7b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.333608 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms92v\" (UniqueName: \"kubernetes.io/projected/5072d38f-e7e3-4f83-a1d0-9220fabfd685-kube-api-access-ms92v\") on node \"crc\" DevicePath \"\"" Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.333635 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5072d38f-e7e3-4f83-a1d0-9220fabfd685-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.333654 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4flqw\" (UniqueName: \"kubernetes.io/projected/04701a36-4402-409c-86fb-4d4240226b7b-kube-api-access-4flqw\") on node \"crc\" DevicePath \"\"" Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.518632 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-8363-account-create-update-9htwj" event={"ID":"04701a36-4402-409c-86fb-4d4240226b7b","Type":"ContainerDied","Data":"a5afb2ed98da902dd737462c1ac567810eb69265a7675623689d9da30d23bae8"} Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.518699 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5afb2ed98da902dd737462c1ac567810eb69265a7675623689d9da30d23bae8" Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.518784 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-8363-account-create-update-9htwj" Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.523679 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-b2ld2" event={"ID":"5072d38f-e7e3-4f83-a1d0-9220fabfd685","Type":"ContainerDied","Data":"4e92e40b7583948fa0c6e9847b3896eba1f453ef728ce73ba0d27899076c216f"} Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.523734 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e92e40b7583948fa0c6e9847b3896eba1f453ef728ce73ba0d27899076c216f" Jan 31 07:08:07 crc kubenswrapper[4687]: I0131 07:08:07.523758 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-b2ld2" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.146247 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-gp5jh"] Jan 31 07:08:13 crc kubenswrapper[4687]: E0131 07:08:13.147121 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5072d38f-e7e3-4f83-a1d0-9220fabfd685" containerName="mariadb-database-create" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.147137 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5072d38f-e7e3-4f83-a1d0-9220fabfd685" containerName="mariadb-database-create" Jan 31 07:08:13 crc kubenswrapper[4687]: E0131 07:08:13.147160 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04701a36-4402-409c-86fb-4d4240226b7b" containerName="mariadb-account-create-update" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.147168 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="04701a36-4402-409c-86fb-4d4240226b7b" containerName="mariadb-account-create-update" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.147360 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="04701a36-4402-409c-86fb-4d4240226b7b" 
containerName="mariadb-account-create-update" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.147385 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5072d38f-e7e3-4f83-a1d0-9220fabfd685" containerName="mariadb-database-create" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.147982 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-gp5jh" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.150667 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-jqjwm" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.153711 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-gp5jh"] Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.156093 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.332157 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10d060e-1cac-4a26-bdd9-b9b98431ae40-config-data\") pod \"glance-db-sync-gp5jh\" (UID: \"a10d060e-1cac-4a26-bdd9-b9b98431ae40\") " pod="glance-kuttl-tests/glance-db-sync-gp5jh" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.332209 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a10d060e-1cac-4a26-bdd9-b9b98431ae40-db-sync-config-data\") pod \"glance-db-sync-gp5jh\" (UID: \"a10d060e-1cac-4a26-bdd9-b9b98431ae40\") " pod="glance-kuttl-tests/glance-db-sync-gp5jh" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.332448 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bl74\" (UniqueName: 
\"kubernetes.io/projected/a10d060e-1cac-4a26-bdd9-b9b98431ae40-kube-api-access-6bl74\") pod \"glance-db-sync-gp5jh\" (UID: \"a10d060e-1cac-4a26-bdd9-b9b98431ae40\") " pod="glance-kuttl-tests/glance-db-sync-gp5jh" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.434003 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bl74\" (UniqueName: \"kubernetes.io/projected/a10d060e-1cac-4a26-bdd9-b9b98431ae40-kube-api-access-6bl74\") pod \"glance-db-sync-gp5jh\" (UID: \"a10d060e-1cac-4a26-bdd9-b9b98431ae40\") " pod="glance-kuttl-tests/glance-db-sync-gp5jh" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.434095 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10d060e-1cac-4a26-bdd9-b9b98431ae40-config-data\") pod \"glance-db-sync-gp5jh\" (UID: \"a10d060e-1cac-4a26-bdd9-b9b98431ae40\") " pod="glance-kuttl-tests/glance-db-sync-gp5jh" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.434131 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a10d060e-1cac-4a26-bdd9-b9b98431ae40-db-sync-config-data\") pod \"glance-db-sync-gp5jh\" (UID: \"a10d060e-1cac-4a26-bdd9-b9b98431ae40\") " pod="glance-kuttl-tests/glance-db-sync-gp5jh" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.450270 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10d060e-1cac-4a26-bdd9-b9b98431ae40-config-data\") pod \"glance-db-sync-gp5jh\" (UID: \"a10d060e-1cac-4a26-bdd9-b9b98431ae40\") " pod="glance-kuttl-tests/glance-db-sync-gp5jh" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.450872 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a10d060e-1cac-4a26-bdd9-b9b98431ae40-db-sync-config-data\") pod 
\"glance-db-sync-gp5jh\" (UID: \"a10d060e-1cac-4a26-bdd9-b9b98431ae40\") " pod="glance-kuttl-tests/glance-db-sync-gp5jh" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.452265 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bl74\" (UniqueName: \"kubernetes.io/projected/a10d060e-1cac-4a26-bdd9-b9b98431ae40-kube-api-access-6bl74\") pod \"glance-db-sync-gp5jh\" (UID: \"a10d060e-1cac-4a26-bdd9-b9b98431ae40\") " pod="glance-kuttl-tests/glance-db-sync-gp5jh" Jan 31 07:08:13 crc kubenswrapper[4687]: I0131 07:08:13.472212 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-gp5jh" Jan 31 07:08:17 crc kubenswrapper[4687]: I0131 07:08:17.462007 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-gp5jh"] Jan 31 07:08:17 crc kubenswrapper[4687]: W0131 07:08:17.464348 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda10d060e_1cac_4a26_bdd9_b9b98431ae40.slice/crio-a233841fa78e895e15953fa604255e2720e795a762389055ad1b43daad408d15 WatchSource:0}: Error finding container a233841fa78e895e15953fa604255e2720e795a762389055ad1b43daad408d15: Status 404 returned error can't find the container with id a233841fa78e895e15953fa604255e2720e795a762389055ad1b43daad408d15 Jan 31 07:08:17 crc kubenswrapper[4687]: I0131 07:08:17.922858 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-gp5jh" event={"ID":"a10d060e-1cac-4a26-bdd9-b9b98431ae40","Type":"ContainerStarted","Data":"a233841fa78e895e15953fa604255e2720e795a762389055ad1b43daad408d15"} Jan 31 07:08:17 crc kubenswrapper[4687]: I0131 07:08:17.925169 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" 
event={"ID":"16bf7b67-a057-4dcc-8c5d-8879e73a2932","Type":"ContainerStarted","Data":"810fe131ab19b33f129ced45c7b3944fe322120df73bad7044e7414e110423a2"} Jan 31 07:08:17 crc kubenswrapper[4687]: I0131 07:08:17.940144 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstackclient" podStartSLOduration=3.969661563 podStartE2EDuration="15.940125114s" podCreationTimestamp="2026-01-31 07:08:02 +0000 UTC" firstStartedPulling="2026-01-31 07:08:05.095641968 +0000 UTC m=+1511.372901543" lastFinishedPulling="2026-01-31 07:08:17.066105519 +0000 UTC m=+1523.343365094" observedRunningTime="2026-01-31 07:08:17.939524037 +0000 UTC m=+1524.216783632" watchObservedRunningTime="2026-01-31 07:08:17.940125114 +0000 UTC m=+1524.217384689" Jan 31 07:08:28 crc kubenswrapper[4687]: I0131 07:08:28.683882 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:08:28 crc kubenswrapper[4687]: I0131 07:08:28.684524 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:08:28 crc kubenswrapper[4687]: I0131 07:08:28.684588 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 07:08:28 crc kubenswrapper[4687]: I0131 07:08:28.685228 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c"} pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:08:28 crc kubenswrapper[4687]: I0131 07:08:28.685281 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" containerID="cri-o://6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" gracePeriod=600 Jan 31 07:08:29 crc kubenswrapper[4687]: I0131 07:08:29.091341 4687 generic.go:334] "Generic (PLEG): container finished" podID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" exitCode=0 Jan 31 07:08:29 crc kubenswrapper[4687]: I0131 07:08:29.091388 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerDied","Data":"6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c"} Jan 31 07:08:29 crc kubenswrapper[4687]: I0131 07:08:29.091450 4687 scope.go:117] "RemoveContainer" containerID="f4ad799ecadff0d9823e53b53153bf63acdd5cce54e7a1eb02184f7b2a6947f6" Jan 31 07:08:34 crc kubenswrapper[4687]: E0131 07:08:34.208048 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:08:34 crc kubenswrapper[4687]: E0131 07:08:34.250001 4687 
log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 31 07:08:34 crc kubenswrapper[4687]: E0131 07:08:34.250152 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bl74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termi
nationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-gp5jh_glance-kuttl-tests(a10d060e-1cac-4a26-bdd9-b9b98431ae40): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:08:34 crc kubenswrapper[4687]: E0131 07:08:34.251316 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="glance-kuttl-tests/glance-db-sync-gp5jh" podUID="a10d060e-1cac-4a26-bdd9-b9b98431ae40" Jan 31 07:08:35 crc kubenswrapper[4687]: I0131 07:08:35.143435 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:08:35 crc kubenswrapper[4687]: E0131 07:08:35.144031 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:08:35 crc kubenswrapper[4687]: E0131 07:08:35.144619 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="glance-kuttl-tests/glance-db-sync-gp5jh" podUID="a10d060e-1cac-4a26-bdd9-b9b98431ae40" Jan 31 07:08:45 crc kubenswrapper[4687]: I0131 07:08:45.607632 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:08:45 crc kubenswrapper[4687]: E0131 
07:08:45.608448 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:08:49 crc kubenswrapper[4687]: I0131 07:08:49.445119 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-gp5jh" event={"ID":"a10d060e-1cac-4a26-bdd9-b9b98431ae40","Type":"ContainerStarted","Data":"ed66e649a15f0fc6ad9b0c05104cfb0b1697da8d3a52b8eb932bc7cf80e0109a"} Jan 31 07:08:49 crc kubenswrapper[4687]: I0131 07:08:49.464373 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-gp5jh" podStartSLOduration=5.165101529 podStartE2EDuration="36.464348815s" podCreationTimestamp="2026-01-31 07:08:13 +0000 UTC" firstStartedPulling="2026-01-31 07:08:17.466764892 +0000 UTC m=+1523.744024467" lastFinishedPulling="2026-01-31 07:08:48.766012148 +0000 UTC m=+1555.043271753" observedRunningTime="2026-01-31 07:08:49.457892799 +0000 UTC m=+1555.735152384" watchObservedRunningTime="2026-01-31 07:08:49.464348815 +0000 UTC m=+1555.741608390" Jan 31 07:08:56 crc kubenswrapper[4687]: I0131 07:08:56.590834 4687 generic.go:334] "Generic (PLEG): container finished" podID="a10d060e-1cac-4a26-bdd9-b9b98431ae40" containerID="ed66e649a15f0fc6ad9b0c05104cfb0b1697da8d3a52b8eb932bc7cf80e0109a" exitCode=0 Jan 31 07:08:56 crc kubenswrapper[4687]: I0131 07:08:56.590914 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-gp5jh" event={"ID":"a10d060e-1cac-4a26-bdd9-b9b98431ae40","Type":"ContainerDied","Data":"ed66e649a15f0fc6ad9b0c05104cfb0b1697da8d3a52b8eb932bc7cf80e0109a"} Jan 31 07:08:56 crc kubenswrapper[4687]: 
I0131 07:08:56.603010 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:08:56 crc kubenswrapper[4687]: E0131 07:08:56.603371 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:08:57 crc kubenswrapper[4687]: I0131 07:08:57.853558 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-gp5jh" Jan 31 07:08:57 crc kubenswrapper[4687]: I0131 07:08:57.990422 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bl74\" (UniqueName: \"kubernetes.io/projected/a10d060e-1cac-4a26-bdd9-b9b98431ae40-kube-api-access-6bl74\") pod \"a10d060e-1cac-4a26-bdd9-b9b98431ae40\" (UID: \"a10d060e-1cac-4a26-bdd9-b9b98431ae40\") " Jan 31 07:08:57 crc kubenswrapper[4687]: I0131 07:08:57.990502 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a10d060e-1cac-4a26-bdd9-b9b98431ae40-db-sync-config-data\") pod \"a10d060e-1cac-4a26-bdd9-b9b98431ae40\" (UID: \"a10d060e-1cac-4a26-bdd9-b9b98431ae40\") " Jan 31 07:08:57 crc kubenswrapper[4687]: I0131 07:08:57.990587 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10d060e-1cac-4a26-bdd9-b9b98431ae40-config-data\") pod \"a10d060e-1cac-4a26-bdd9-b9b98431ae40\" (UID: \"a10d060e-1cac-4a26-bdd9-b9b98431ae40\") " Jan 31 07:08:58 crc kubenswrapper[4687]: I0131 07:08:58.154025 4687 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a10d060e-1cac-4a26-bdd9-b9b98431ae40-kube-api-access-6bl74" (OuterVolumeSpecName: "kube-api-access-6bl74") pod "a10d060e-1cac-4a26-bdd9-b9b98431ae40" (UID: "a10d060e-1cac-4a26-bdd9-b9b98431ae40"). InnerVolumeSpecName "kube-api-access-6bl74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:08:58 crc kubenswrapper[4687]: I0131 07:08:58.154565 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10d060e-1cac-4a26-bdd9-b9b98431ae40-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a10d060e-1cac-4a26-bdd9-b9b98431ae40" (UID: "a10d060e-1cac-4a26-bdd9-b9b98431ae40"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:08:58 crc kubenswrapper[4687]: I0131 07:08:58.200873 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10d060e-1cac-4a26-bdd9-b9b98431ae40-config-data" (OuterVolumeSpecName: "config-data") pod "a10d060e-1cac-4a26-bdd9-b9b98431ae40" (UID: "a10d060e-1cac-4a26-bdd9-b9b98431ae40"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:08:58 crc kubenswrapper[4687]: I0131 07:08:58.249961 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bl74\" (UniqueName: \"kubernetes.io/projected/a10d060e-1cac-4a26-bdd9-b9b98431ae40-kube-api-access-6bl74\") on node \"crc\" DevicePath \"\"" Jan 31 07:08:58 crc kubenswrapper[4687]: I0131 07:08:58.250265 4687 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a10d060e-1cac-4a26-bdd9-b9b98431ae40-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:08:58 crc kubenswrapper[4687]: I0131 07:08:58.250370 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10d060e-1cac-4a26-bdd9-b9b98431ae40-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:08:58 crc kubenswrapper[4687]: I0131 07:08:58.613256 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-gp5jh" event={"ID":"a10d060e-1cac-4a26-bdd9-b9b98431ae40","Type":"ContainerDied","Data":"a233841fa78e895e15953fa604255e2720e795a762389055ad1b43daad408d15"} Jan 31 07:08:58 crc kubenswrapper[4687]: I0131 07:08:58.613308 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a233841fa78e895e15953fa604255e2720e795a762389055ad1b43daad408d15" Jan 31 07:08:58 crc kubenswrapper[4687]: I0131 07:08:58.613338 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-gp5jh" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.045564 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:09:01 crc kubenswrapper[4687]: E0131 07:09:01.046123 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10d060e-1cac-4a26-bdd9-b9b98431ae40" containerName="glance-db-sync" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.046142 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10d060e-1cac-4a26-bdd9-b9b98431ae40" containerName="glance-db-sync" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.046311 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10d060e-1cac-4a26-bdd9-b9b98431ae40" containerName="glance-db-sync" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.047032 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.048621 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-jqjwm" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.049767 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.051869 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.067397 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.087731 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.089077 4687 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.122584 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.205481 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.205552 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45d3b41c-1737-4ccb-8584-cdb9c01026f2-scripts\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.205587 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-logs\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.205722 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.205774 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" 
(UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-run\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.205819 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xwzq\" (UniqueName: \"kubernetes.io/projected/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-kube-api-access-8xwzq\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.205867 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-lib-modules\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.205889 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.205934 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-etc-nvme\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.205976 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/45d3b41c-1737-4ccb-8584-cdb9c01026f2-logs\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206085 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206117 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-httpd-run\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206156 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-724zf\" (UniqueName: \"kubernetes.io/projected/45d3b41c-1737-4ccb-8584-cdb9c01026f2-kube-api-access-724zf\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206187 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d3b41c-1737-4ccb-8584-cdb9c01026f2-config-data\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206312 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206358 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-sys\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206380 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206436 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-run\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206474 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-scripts\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206533 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-dev\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206594 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-dev\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206645 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-lib-modules\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206671 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-etc-nvme\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206691 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-sys\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206728 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206762 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45d3b41c-1737-4ccb-8584-cdb9c01026f2-httpd-run\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206804 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.206827 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-config-data\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.269202 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:09:01 crc kubenswrapper[4687]: E0131 07:09:01.269934 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config-data dev etc-iscsi etc-nvme glance glance-cache httpd-run kube-api-access-8xwzq lib-modules logs run scripts sys var-locks-brick], unattached volumes=[], failed to process volumes=[]: context canceled" pod="glance-kuttl-tests/glance-default-single-1" 
podUID="d2e18917-2f0a-49c5-9d2d-ade90bb3fdee" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.308859 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.308917 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45d3b41c-1737-4ccb-8584-cdb9c01026f2-scripts\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.308938 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-logs\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.308973 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.308999 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-run\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309024 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8xwzq\" (UniqueName: \"kubernetes.io/projected/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-kube-api-access-8xwzq\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309047 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-lib-modules\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309068 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309094 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-etc-nvme\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309121 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45d3b41c-1737-4ccb-8584-cdb9c01026f2-logs\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309150 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309172 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-httpd-run\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309198 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-724zf\" (UniqueName: \"kubernetes.io/projected/45d3b41c-1737-4ccb-8584-cdb9c01026f2-kube-api-access-724zf\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309221 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d3b41c-1737-4ccb-8584-cdb9c01026f2-config-data\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309233 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309255 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-single-1\" (UID: 
\"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309277 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-sys\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309303 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309322 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-run\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309338 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-etc-nvme\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309355 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-scripts\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309433 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-dev\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309498 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-dev\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309549 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-etc-nvme\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309574 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-lib-modules\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309603 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-sys\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309635 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309658 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45d3b41c-1737-4ccb-8584-cdb9c01026f2-httpd-run\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309691 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-config-data\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309731 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.309892 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.310116 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: 
\"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.310444 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-logs\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.310497 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-etc-nvme\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.310504 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-dev\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.310558 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.310601 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-lib-modules\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc 
kubenswrapper[4687]: I0131 07:09:01.310625 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-sys\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.310645 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.310729 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.310804 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-lib-modules\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.310818 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") device mount path \"/mnt/openstack/pv20\"" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.310879 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.310934 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-dev\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.310947 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-run\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.311027 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-run\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.311210 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45d3b41c-1737-4ccb-8584-cdb9c01026f2-httpd-run\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.311208 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-httpd-run\") pod 
\"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.311268 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45d3b41c-1737-4ccb-8584-cdb9c01026f2-logs\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.313564 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-sys\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.323754 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d3b41c-1737-4ccb-8584-cdb9c01026f2-config-data\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.324053 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45d3b41c-1737-4ccb-8584-cdb9c01026f2-scripts\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.334274 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-scripts\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 
07:09:01.336005 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-config-data\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.336342 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xwzq\" (UniqueName: \"kubernetes.io/projected/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-kube-api-access-8xwzq\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.341689 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-724zf\" (UniqueName: \"kubernetes.io/projected/45d3b41c-1737-4ccb-8584-cdb9c01026f2-kube-api-access-724zf\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.358186 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.369308 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.369353 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-0\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.373973 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.388624 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-single-1\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.632788 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.643014 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.649623 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.714909 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-logs\") pod \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.714975 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715002 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715029 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-etc-nvme\") pod \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715066 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-dev\") pod \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715148 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-run\") pod \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715366 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-etc-iscsi\") pod \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715388 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-sys\") pod \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715441 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-config-data\") pod \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715446 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-logs" (OuterVolumeSpecName: "logs") pod "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee" (UID: "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715470 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xwzq\" (UniqueName: \"kubernetes.io/projected/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-kube-api-access-8xwzq\") pod \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715494 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-run" (OuterVolumeSpecName: "run") pod "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee" (UID: "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715521 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-var-locks-brick\") pod \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715573 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-httpd-run\") pod \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715574 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee" (UID: "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715622 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-lib-modules\") pod \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715664 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-scripts\") pod \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\" (UID: \"d2e18917-2f0a-49c5-9d2d-ade90bb3fdee\") " Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715812 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-dev" (OuterVolumeSpecName: "dev") pod "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee" (UID: "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715855 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee" (UID: "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.715970 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-sys" (OuterVolumeSpecName: "sys") pod "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee" (UID: "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.716030 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee" (UID: "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.716100 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee" (UID: "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.716311 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.716335 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.716346 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.716358 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:01 crc kubenswrapper[4687]: 
I0131 07:09:01.716370 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.716380 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.716397 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.716428 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.717476 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee" (UID: "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.719668 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-config-data" (OuterVolumeSpecName: "config-data") pod "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee" (UID: "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.719671 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage20-crc" (OuterVolumeSpecName: "glance-cache") pod "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee" (UID: "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee"). InnerVolumeSpecName "local-storage20-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.719789 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-scripts" (OuterVolumeSpecName: "scripts") pod "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee" (UID: "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.720021 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee" (UID: "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.720720 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-kube-api-access-8xwzq" (OuterVolumeSpecName: "kube-api-access-8xwzq") pod "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee" (UID: "d2e18917-2f0a-49c5-9d2d-ade90bb3fdee"). InnerVolumeSpecName "kube-api-access-8xwzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.818148 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.818217 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") on node \"crc\" " Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.818233 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.818243 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.818252 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.818261 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xwzq\" (UniqueName: \"kubernetes.io/projected/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee-kube-api-access-8xwzq\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.831796 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.832731 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage20-crc" 
(UniqueName: "kubernetes.io/local-volume/local-storage20-crc") on node "crc" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.920139 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:01 crc kubenswrapper[4687]: I0131 07:09:01.920398 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.643750 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.643806 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"45d3b41c-1737-4ccb-8584-cdb9c01026f2","Type":"ContainerStarted","Data":"2d92c8c97c92a13ce98bf3d9cd78cac1ea17472dce8102ae11369949d847d172"} Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.644180 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"45d3b41c-1737-4ccb-8584-cdb9c01026f2","Type":"ContainerStarted","Data":"40b5557a569a0977e2721a24e166e421b3e9b4c2e46750387dcf046501a5dfe9"} Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.644210 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"45d3b41c-1737-4ccb-8584-cdb9c01026f2","Type":"ContainerStarted","Data":"4159f7e0e4a4211aa045731c784da12714eac5fbb2e0f76382262877de48e1d0"} Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.676605 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=2.676579743 podStartE2EDuration="2.676579743s" podCreationTimestamp="2026-01-31 
07:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:09:02.673710304 +0000 UTC m=+1568.950969899" watchObservedRunningTime="2026-01-31 07:09:02.676579743 +0000 UTC m=+1568.953839318" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.722299 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.722364 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.750707 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.752043 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.761903 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.938233 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b10021a-2d1a-45ce-855b-c83f06fe15d0-logs\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.938306 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 
07:09:02.938335 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.938352 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-dev\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.938399 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.938483 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z9fb\" (UniqueName: \"kubernetes.io/projected/3b10021a-2d1a-45ce-855b-c83f06fe15d0-kube-api-access-5z9fb\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.938501 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-sys\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.938525 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b10021a-2d1a-45ce-855b-c83f06fe15d0-config-data\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.938543 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b10021a-2d1a-45ce-855b-c83f06fe15d0-httpd-run\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.938585 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-lib-modules\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.938611 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b10021a-2d1a-45ce-855b-c83f06fe15d0-scripts\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.938637 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.938660 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-etc-nvme\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:02 crc kubenswrapper[4687]: I0131 07:09:02.938689 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-run\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.039544 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b10021a-2d1a-45ce-855b-c83f06fe15d0-scripts\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.039603 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.039640 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-etc-nvme\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.039664 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-run\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.039709 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b10021a-2d1a-45ce-855b-c83f06fe15d0-logs\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.039743 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.039764 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.039788 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-dev\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.039845 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-1\" (UID: 
\"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.039883 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z9fb\" (UniqueName: \"kubernetes.io/projected/3b10021a-2d1a-45ce-855b-c83f06fe15d0-kube-api-access-5z9fb\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.039901 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-sys\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.039923 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b10021a-2d1a-45ce-855b-c83f06fe15d0-config-data\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.039945 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b10021a-2d1a-45ce-855b-c83f06fe15d0-httpd-run\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.039993 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-lib-modules\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" 
Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.040080 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-lib-modules\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.040819 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") device mount path \"/mnt/openstack/pv20\"" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.040850 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-etc-nvme\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.040931 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-run\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.041284 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.041339 4687 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.041443 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.041483 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-sys\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.041444 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-dev\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.041942 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b10021a-2d1a-45ce-855b-c83f06fe15d0-httpd-run\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.041976 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b10021a-2d1a-45ce-855b-c83f06fe15d0-logs\") pod 
\"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.054212 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b10021a-2d1a-45ce-855b-c83f06fe15d0-config-data\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.148161 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b10021a-2d1a-45ce-855b-c83f06fe15d0-scripts\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.165147 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.166123 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z9fb\" (UniqueName: \"kubernetes.io/projected/3b10021a-2d1a-45ce-855b-c83f06fe15d0-kube-api-access-5z9fb\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.194527 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-1\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc 
kubenswrapper[4687]: I0131 07:09:03.377464 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:03 crc kubenswrapper[4687]: I0131 07:09:03.703545 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2e18917-2f0a-49c5-9d2d-ade90bb3fdee" path="/var/lib/kubelet/pods/d2e18917-2f0a-49c5-9d2d-ade90bb3fdee/volumes" Jan 31 07:09:04 crc kubenswrapper[4687]: I0131 07:09:04.079002 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:09:04 crc kubenswrapper[4687]: W0131 07:09:04.188559 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b10021a_2d1a_45ce_855b_c83f06fe15d0.slice/crio-9abe1e0c55ac7fd3741e100320c62d8ef210303b5e0d071bf2e64e6aee5ec807 WatchSource:0}: Error finding container 9abe1e0c55ac7fd3741e100320c62d8ef210303b5e0d071bf2e64e6aee5ec807: Status 404 returned error can't find the container with id 9abe1e0c55ac7fd3741e100320c62d8ef210303b5e0d071bf2e64e6aee5ec807 Jan 31 07:09:04 crc kubenswrapper[4687]: I0131 07:09:04.714939 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"3b10021a-2d1a-45ce-855b-c83f06fe15d0","Type":"ContainerStarted","Data":"782097ca1955afe87099056432efbb11c58169f55e01d13b05cff3109d0810f3"} Jan 31 07:09:04 crc kubenswrapper[4687]: I0131 07:09:04.715530 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"3b10021a-2d1a-45ce-855b-c83f06fe15d0","Type":"ContainerStarted","Data":"9abe1e0c55ac7fd3741e100320c62d8ef210303b5e0d071bf2e64e6aee5ec807"} Jan 31 07:09:05 crc kubenswrapper[4687]: I0131 07:09:05.722053 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" 
event={"ID":"3b10021a-2d1a-45ce-855b-c83f06fe15d0","Type":"ContainerStarted","Data":"0112455a1956841a26a7c824fb3f602c55f76d3431b009109bdc0fb0a3976481"} Jan 31 07:09:05 crc kubenswrapper[4687]: I0131 07:09:05.750991 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-1" podStartSLOduration=3.7509730230000002 podStartE2EDuration="3.750973023s" podCreationTimestamp="2026-01-31 07:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:09:05.743402876 +0000 UTC m=+1572.020662451" watchObservedRunningTime="2026-01-31 07:09:05.750973023 +0000 UTC m=+1572.028232588" Jan 31 07:09:10 crc kubenswrapper[4687]: I0131 07:09:10.604045 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:09:10 crc kubenswrapper[4687]: E0131 07:09:10.605223 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:09:11 crc kubenswrapper[4687]: I0131 07:09:11.374926 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:11 crc kubenswrapper[4687]: I0131 07:09:11.374983 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:11 crc kubenswrapper[4687]: I0131 07:09:11.410076 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:11 crc 
kubenswrapper[4687]: I0131 07:09:11.422590 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:11 crc kubenswrapper[4687]: I0131 07:09:11.774831 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:11 crc kubenswrapper[4687]: I0131 07:09:11.774928 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:13 crc kubenswrapper[4687]: I0131 07:09:13.378190 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:13 crc kubenswrapper[4687]: I0131 07:09:13.378263 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:13 crc kubenswrapper[4687]: I0131 07:09:13.413999 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:13 crc kubenswrapper[4687]: I0131 07:09:13.421498 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:14 crc kubenswrapper[4687]: I0131 07:09:14.043256 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:14 crc kubenswrapper[4687]: I0131 07:09:14.043619 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:14 crc kubenswrapper[4687]: I0131 07:09:14.720521 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:14 crc kubenswrapper[4687]: I0131 07:09:14.720614 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:09:14 crc 
kubenswrapper[4687]: I0131 07:09:14.985309 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:16 crc kubenswrapper[4687]: I0131 07:09:16.055584 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:09:16 crc kubenswrapper[4687]: I0131 07:09:16.055856 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:09:16 crc kubenswrapper[4687]: I0131 07:09:16.598093 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:16 crc kubenswrapper[4687]: I0131 07:09:16.599597 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:16 crc kubenswrapper[4687]: I0131 07:09:16.669571 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:09:16 crc kubenswrapper[4687]: I0131 07:09:16.669810 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="45d3b41c-1737-4ccb-8584-cdb9c01026f2" containerName="glance-log" containerID="cri-o://40b5557a569a0977e2721a24e166e421b3e9b4c2e46750387dcf046501a5dfe9" gracePeriod=30 Jan 31 07:09:16 crc kubenswrapper[4687]: I0131 07:09:16.669948 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="45d3b41c-1737-4ccb-8584-cdb9c01026f2" containerName="glance-httpd" containerID="cri-o://2d92c8c97c92a13ce98bf3d9cd78cac1ea17472dce8102ae11369949d847d172" gracePeriod=30 Jan 31 07:09:16 crc kubenswrapper[4687]: I0131 07:09:16.691637 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="45d3b41c-1737-4ccb-8584-cdb9c01026f2" containerName="glance-httpd" probeResult="failure" output="Get 
\"http://10.217.0.100:9292/healthcheck\": EOF" Jan 31 07:09:16 crc kubenswrapper[4687]: I0131 07:09:16.691665 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="45d3b41c-1737-4ccb-8584-cdb9c01026f2" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.100:9292/healthcheck\": EOF" Jan 31 07:09:17 crc kubenswrapper[4687]: I0131 07:09:17.063658 4687 generic.go:334] "Generic (PLEG): container finished" podID="45d3b41c-1737-4ccb-8584-cdb9c01026f2" containerID="40b5557a569a0977e2721a24e166e421b3e9b4c2e46750387dcf046501a5dfe9" exitCode=143 Jan 31 07:09:17 crc kubenswrapper[4687]: I0131 07:09:17.064346 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"45d3b41c-1737-4ccb-8584-cdb9c01026f2","Type":"ContainerDied","Data":"40b5557a569a0977e2721a24e166e421b3e9b4c2e46750387dcf046501a5dfe9"} Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.508477 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.603872 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:09:21 crc kubenswrapper[4687]: E0131 07:09:21.604518 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.707290 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-dev\") pod \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.707332 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-run\") pod \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.707359 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-etc-nvme\") pod \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.707396 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/45d3b41c-1737-4ccb-8584-cdb9c01026f2-httpd-run\") pod \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.707458 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.707481 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-724zf\" (UniqueName: \"kubernetes.io/projected/45d3b41c-1737-4ccb-8584-cdb9c01026f2-kube-api-access-724zf\") pod \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.707506 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-sys\") pod \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.707532 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d3b41c-1737-4ccb-8584-cdb9c01026f2-config-data\") pod \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.707556 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.707596 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45d3b41c-1737-4ccb-8584-cdb9c01026f2-scripts\") pod \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.707619 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-lib-modules\") pod \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.707634 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-etc-iscsi\") pod \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.707682 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-var-locks-brick\") pod \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.707711 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45d3b41c-1737-4ccb-8584-cdb9c01026f2-logs\") pod \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\" (UID: \"45d3b41c-1737-4ccb-8584-cdb9c01026f2\") " Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.708330 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45d3b41c-1737-4ccb-8584-cdb9c01026f2-logs" (OuterVolumeSpecName: "logs") pod "45d3b41c-1737-4ccb-8584-cdb9c01026f2" (UID: "45d3b41c-1737-4ccb-8584-cdb9c01026f2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.708374 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-sys" (OuterVolumeSpecName: "sys") pod "45d3b41c-1737-4ccb-8584-cdb9c01026f2" (UID: "45d3b41c-1737-4ccb-8584-cdb9c01026f2"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.708393 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-run" (OuterVolumeSpecName: "run") pod "45d3b41c-1737-4ccb-8584-cdb9c01026f2" (UID: "45d3b41c-1737-4ccb-8584-cdb9c01026f2"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.708392 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-dev" (OuterVolumeSpecName: "dev") pod "45d3b41c-1737-4ccb-8584-cdb9c01026f2" (UID: "45d3b41c-1737-4ccb-8584-cdb9c01026f2"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.708426 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "45d3b41c-1737-4ccb-8584-cdb9c01026f2" (UID: "45d3b41c-1737-4ccb-8584-cdb9c01026f2"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.708545 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45d3b41c-1737-4ccb-8584-cdb9c01026f2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "45d3b41c-1737-4ccb-8584-cdb9c01026f2" (UID: "45d3b41c-1737-4ccb-8584-cdb9c01026f2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.709133 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "45d3b41c-1737-4ccb-8584-cdb9c01026f2" (UID: "45d3b41c-1737-4ccb-8584-cdb9c01026f2"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.709169 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "45d3b41c-1737-4ccb-8584-cdb9c01026f2" (UID: "45d3b41c-1737-4ccb-8584-cdb9c01026f2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.709193 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "45d3b41c-1737-4ccb-8584-cdb9c01026f2" (UID: "45d3b41c-1737-4ccb-8584-cdb9c01026f2"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.714044 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "45d3b41c-1737-4ccb-8584-cdb9c01026f2" (UID: "45d3b41c-1737-4ccb-8584-cdb9c01026f2"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.715875 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d3b41c-1737-4ccb-8584-cdb9c01026f2-scripts" (OuterVolumeSpecName: "scripts") pod "45d3b41c-1737-4ccb-8584-cdb9c01026f2" (UID: "45d3b41c-1737-4ccb-8584-cdb9c01026f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.716120 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d3b41c-1737-4ccb-8584-cdb9c01026f2-kube-api-access-724zf" (OuterVolumeSpecName: "kube-api-access-724zf") pod "45d3b41c-1737-4ccb-8584-cdb9c01026f2" (UID: "45d3b41c-1737-4ccb-8584-cdb9c01026f2"). InnerVolumeSpecName "kube-api-access-724zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.728159 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance-cache") pod "45d3b41c-1737-4ccb-8584-cdb9c01026f2" (UID: "45d3b41c-1737-4ccb-8584-cdb9c01026f2"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.744475 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d3b41c-1737-4ccb-8584-cdb9c01026f2-config-data" (OuterVolumeSpecName: "config-data") pod "45d3b41c-1737-4ccb-8584-cdb9c01026f2" (UID: "45d3b41c-1737-4ccb-8584-cdb9c01026f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.808847 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.808881 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45d3b41c-1737-4ccb-8584-cdb9c01026f2-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.808917 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.808930 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-724zf\" (UniqueName: \"kubernetes.io/projected/45d3b41c-1737-4ccb-8584-cdb9c01026f2-kube-api-access-724zf\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.808943 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.808961 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/45d3b41c-1737-4ccb-8584-cdb9c01026f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.808976 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.808987 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45d3b41c-1737-4ccb-8584-cdb9c01026f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.808997 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.809009 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.809019 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.809028 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45d3b41c-1737-4ccb-8584-cdb9c01026f2-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.809037 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.809047 4687 reconciler_common.go:293] "Volume 
detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/45d3b41c-1737-4ccb-8584-cdb9c01026f2-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.824222 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.828181 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.910332 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:21 crc kubenswrapper[4687]: I0131 07:09:21.910372 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.101329 4687 generic.go:334] "Generic (PLEG): container finished" podID="45d3b41c-1737-4ccb-8584-cdb9c01026f2" containerID="2d92c8c97c92a13ce98bf3d9cd78cac1ea17472dce8102ae11369949d847d172" exitCode=0 Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.101387 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"45d3b41c-1737-4ccb-8584-cdb9c01026f2","Type":"ContainerDied","Data":"2d92c8c97c92a13ce98bf3d9cd78cac1ea17472dce8102ae11369949d847d172"} Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.101430 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.101457 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"45d3b41c-1737-4ccb-8584-cdb9c01026f2","Type":"ContainerDied","Data":"4159f7e0e4a4211aa045731c784da12714eac5fbb2e0f76382262877de48e1d0"} Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.101480 4687 scope.go:117] "RemoveContainer" containerID="2d92c8c97c92a13ce98bf3d9cd78cac1ea17472dce8102ae11369949d847d172" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.121120 4687 scope.go:117] "RemoveContainer" containerID="40b5557a569a0977e2721a24e166e421b3e9b4c2e46750387dcf046501a5dfe9" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.139067 4687 scope.go:117] "RemoveContainer" containerID="2d92c8c97c92a13ce98bf3d9cd78cac1ea17472dce8102ae11369949d847d172" Jan 31 07:09:22 crc kubenswrapper[4687]: E0131 07:09:22.139471 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d92c8c97c92a13ce98bf3d9cd78cac1ea17472dce8102ae11369949d847d172\": container with ID starting with 2d92c8c97c92a13ce98bf3d9cd78cac1ea17472dce8102ae11369949d847d172 not found: ID does not exist" containerID="2d92c8c97c92a13ce98bf3d9cd78cac1ea17472dce8102ae11369949d847d172" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.139515 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d92c8c97c92a13ce98bf3d9cd78cac1ea17472dce8102ae11369949d847d172"} err="failed to get container status \"2d92c8c97c92a13ce98bf3d9cd78cac1ea17472dce8102ae11369949d847d172\": rpc error: code = NotFound desc = could not find container \"2d92c8c97c92a13ce98bf3d9cd78cac1ea17472dce8102ae11369949d847d172\": container with ID starting with 2d92c8c97c92a13ce98bf3d9cd78cac1ea17472dce8102ae11369949d847d172 not found: ID does not exist" Jan 31 
07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.139540 4687 scope.go:117] "RemoveContainer" containerID="40b5557a569a0977e2721a24e166e421b3e9b4c2e46750387dcf046501a5dfe9" Jan 31 07:09:22 crc kubenswrapper[4687]: E0131 07:09:22.140037 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40b5557a569a0977e2721a24e166e421b3e9b4c2e46750387dcf046501a5dfe9\": container with ID starting with 40b5557a569a0977e2721a24e166e421b3e9b4c2e46750387dcf046501a5dfe9 not found: ID does not exist" containerID="40b5557a569a0977e2721a24e166e421b3e9b4c2e46750387dcf046501a5dfe9" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.140160 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b5557a569a0977e2721a24e166e421b3e9b4c2e46750387dcf046501a5dfe9"} err="failed to get container status \"40b5557a569a0977e2721a24e166e421b3e9b4c2e46750387dcf046501a5dfe9\": rpc error: code = NotFound desc = could not find container \"40b5557a569a0977e2721a24e166e421b3e9b4c2e46750387dcf046501a5dfe9\": container with ID starting with 40b5557a569a0977e2721a24e166e421b3e9b4c2e46750387dcf046501a5dfe9 not found: ID does not exist" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.141479 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.149980 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.157294 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:09:22 crc kubenswrapper[4687]: E0131 07:09:22.157590 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d3b41c-1737-4ccb-8584-cdb9c01026f2" containerName="glance-log" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.157606 4687 
state_mem.go:107] "Deleted CPUSet assignment" podUID="45d3b41c-1737-4ccb-8584-cdb9c01026f2" containerName="glance-log" Jan 31 07:09:22 crc kubenswrapper[4687]: E0131 07:09:22.157628 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d3b41c-1737-4ccb-8584-cdb9c01026f2" containerName="glance-httpd" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.157634 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d3b41c-1737-4ccb-8584-cdb9c01026f2" containerName="glance-httpd" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.157779 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d3b41c-1737-4ccb-8584-cdb9c01026f2" containerName="glance-httpd" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.157800 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d3b41c-1737-4ccb-8584-cdb9c01026f2" containerName="glance-log" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.158516 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.177215 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.316755 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-sys\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.316805 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.316824 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-scripts\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.316839 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-logs\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.316857 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-run\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.316879 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.316974 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-etc-nvme\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.317137 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-lib-modules\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.317186 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.317256 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-dev\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.317333 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-config-data\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.317453 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.317519 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-httpd-run\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.317580 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzhcx\" (UniqueName: \"kubernetes.io/projected/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-kube-api-access-zzhcx\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419355 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-lib-modules\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419429 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419469 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-dev\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419504 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-config-data\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419542 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-lib-modules\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419594 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-0\" (UID: 
\"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419635 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419558 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419696 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-httpd-run\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419728 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzhcx\" (UniqueName: \"kubernetes.io/projected/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-kube-api-access-zzhcx\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419772 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-sys\") pod \"glance-default-single-0\" (UID: 
\"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419798 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419825 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-scripts\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419846 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-logs\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419875 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-run\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419903 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 
07:09:22.419933 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-etc-nvme\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.420012 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-etc-nvme\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.420268 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-httpd-run\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.420297 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-sys\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.420331 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-run\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.420399 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.420588 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-logs\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.420634 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.419586 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-dev\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.429010 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-config-data\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.435218 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-scripts\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " 
pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.437548 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzhcx\" (UniqueName: \"kubernetes.io/projected/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-kube-api-access-zzhcx\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.447873 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.451313 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-single-0\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.477558 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:22 crc kubenswrapper[4687]: I0131 07:09:22.732603 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:09:22 crc kubenswrapper[4687]: W0131 07:09:22.734472 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd5490fd_a889_4b5c_bc5e_bd6c7dbee499.slice/crio-148c7a233d04ae85d4b07a7d670761c2b056fce6864ae02e7a4c88cd349179c4 WatchSource:0}: Error finding container 148c7a233d04ae85d4b07a7d670761c2b056fce6864ae02e7a4c88cd349179c4: Status 404 returned error can't find the container with id 148c7a233d04ae85d4b07a7d670761c2b056fce6864ae02e7a4c88cd349179c4 Jan 31 07:09:23 crc kubenswrapper[4687]: I0131 07:09:23.121122 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499","Type":"ContainerStarted","Data":"1a3491e4f3de4c2f2bfc9cdd9d078d8111f558d3d5c4a6ed75fc0ab87c1779f7"} Jan 31 07:09:23 crc kubenswrapper[4687]: I0131 07:09:23.121508 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499","Type":"ContainerStarted","Data":"e286ce491a9f5d586a153f68e4937328f0c75c03d116d97636fa1eb11f744a5f"} Jan 31 07:09:23 crc kubenswrapper[4687]: I0131 07:09:23.121520 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499","Type":"ContainerStarted","Data":"148c7a233d04ae85d4b07a7d670761c2b056fce6864ae02e7a4c88cd349179c4"} Jan 31 07:09:23 crc kubenswrapper[4687]: I0131 07:09:23.146252 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=1.146221183 podStartE2EDuration="1.146221183s" 
podCreationTimestamp="2026-01-31 07:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:09:23.144667131 +0000 UTC m=+1589.421926706" watchObservedRunningTime="2026-01-31 07:09:23.146221183 +0000 UTC m=+1589.423480758" Jan 31 07:09:23 crc kubenswrapper[4687]: I0131 07:09:23.612059 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45d3b41c-1737-4ccb-8584-cdb9c01026f2" path="/var/lib/kubelet/pods/45d3b41c-1737-4ccb-8584-cdb9c01026f2/volumes" Jan 31 07:09:32 crc kubenswrapper[4687]: I0131 07:09:32.478288 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:32 crc kubenswrapper[4687]: I0131 07:09:32.478982 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:32 crc kubenswrapper[4687]: I0131 07:09:32.517301 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:32 crc kubenswrapper[4687]: I0131 07:09:32.523150 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:33 crc kubenswrapper[4687]: I0131 07:09:33.188679 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:33 crc kubenswrapper[4687]: I0131 07:09:33.189300 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:35 crc kubenswrapper[4687]: I0131 07:09:35.216004 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:09:35 crc kubenswrapper[4687]: I0131 07:09:35.216336 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:09:35 crc 
kubenswrapper[4687]: I0131 07:09:35.282327 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:35 crc kubenswrapper[4687]: I0131 07:09:35.502844 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:35 crc kubenswrapper[4687]: I0131 07:09:35.607080 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:09:35 crc kubenswrapper[4687]: E0131 07:09:35.607381 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:09:46 crc kubenswrapper[4687]: I0131 07:09:46.603943 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:09:46 crc kubenswrapper[4687]: E0131 07:09:46.604750 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.276312 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-gp5jh"] Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.283252 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["glance-kuttl-tests/glance-db-sync-gp5jh"] Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.360153 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance8363-account-delete-ds2fg"] Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.361216 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance8363-account-delete-ds2fg" Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.377047 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.377303 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" containerName="glance-log" containerID="cri-o://e286ce491a9f5d586a153f68e4937328f0c75c03d116d97636fa1eb11f744a5f" gracePeriod=30 Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.377343 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" containerName="glance-httpd" containerID="cri-o://1a3491e4f3de4c2f2bfc9cdd9d078d8111f558d3d5c4a6ed75fc0ab87c1779f7" gracePeriod=30 Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.398644 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.398970 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="3b10021a-2d1a-45ce-855b-c83f06fe15d0" containerName="glance-httpd" containerID="cri-o://0112455a1956841a26a7c824fb3f602c55f76d3431b009109bdc0fb0a3976481" gracePeriod=30 Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.398921 4687 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="glance-kuttl-tests/glance-default-single-1" podUID="3b10021a-2d1a-45ce-855b-c83f06fe15d0" containerName="glance-log" containerID="cri-o://782097ca1955afe87099056432efbb11c58169f55e01d13b05cff3109d0810f3" gracePeriod=30 Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.406472 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ed5beb8-6653-416b-b40d-4ee21cdd1568-operator-scripts\") pod \"glance8363-account-delete-ds2fg\" (UID: \"2ed5beb8-6653-416b-b40d-4ee21cdd1568\") " pod="glance-kuttl-tests/glance8363-account-delete-ds2fg" Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.406559 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwkqj\" (UniqueName: \"kubernetes.io/projected/2ed5beb8-6653-416b-b40d-4ee21cdd1568-kube-api-access-hwkqj\") pod \"glance8363-account-delete-ds2fg\" (UID: \"2ed5beb8-6653-416b-b40d-4ee21cdd1568\") " pod="glance-kuttl-tests/glance8363-account-delete-ds2fg" Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.411503 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance8363-account-delete-ds2fg"] Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.464846 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.465070 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/openstackclient" podUID="16bf7b67-a057-4dcc-8c5d-8879e73a2932" containerName="openstackclient" containerID="cri-o://810fe131ab19b33f129ced45c7b3944fe322120df73bad7044e7414e110423a2" gracePeriod=30 Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.507929 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2ed5beb8-6653-416b-b40d-4ee21cdd1568-operator-scripts\") pod \"glance8363-account-delete-ds2fg\" (UID: \"2ed5beb8-6653-416b-b40d-4ee21cdd1568\") " pod="glance-kuttl-tests/glance8363-account-delete-ds2fg" Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.508045 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwkqj\" (UniqueName: \"kubernetes.io/projected/2ed5beb8-6653-416b-b40d-4ee21cdd1568-kube-api-access-hwkqj\") pod \"glance8363-account-delete-ds2fg\" (UID: \"2ed5beb8-6653-416b-b40d-4ee21cdd1568\") " pod="glance-kuttl-tests/glance8363-account-delete-ds2fg" Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.509099 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ed5beb8-6653-416b-b40d-4ee21cdd1568-operator-scripts\") pod \"glance8363-account-delete-ds2fg\" (UID: \"2ed5beb8-6653-416b-b40d-4ee21cdd1568\") " pod="glance-kuttl-tests/glance8363-account-delete-ds2fg" Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.532697 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwkqj\" (UniqueName: \"kubernetes.io/projected/2ed5beb8-6653-416b-b40d-4ee21cdd1568-kube-api-access-hwkqj\") pod \"glance8363-account-delete-ds2fg\" (UID: \"2ed5beb8-6653-416b-b40d-4ee21cdd1568\") " pod="glance-kuttl-tests/glance8363-account-delete-ds2fg" Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.693892 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance8363-account-delete-ds2fg" Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.850304 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.915227 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjbvb\" (UniqueName: \"kubernetes.io/projected/16bf7b67-a057-4dcc-8c5d-8879e73a2932-kube-api-access-mjbvb\") pod \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\" (UID: \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\") " Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.915361 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-config\") pod \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\" (UID: \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\") " Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.915465 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-config-secret\") pod \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\" (UID: \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\") " Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.915554 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-scripts\") pod \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\" (UID: \"16bf7b67-a057-4dcc-8c5d-8879e73a2932\") " Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.916608 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-scripts" (OuterVolumeSpecName: "openstack-scripts") pod "16bf7b67-a057-4dcc-8c5d-8879e73a2932" (UID: "16bf7b67-a057-4dcc-8c5d-8879e73a2932"). InnerVolumeSpecName "openstack-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.923686 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bf7b67-a057-4dcc-8c5d-8879e73a2932-kube-api-access-mjbvb" (OuterVolumeSpecName: "kube-api-access-mjbvb") pod "16bf7b67-a057-4dcc-8c5d-8879e73a2932" (UID: "16bf7b67-a057-4dcc-8c5d-8879e73a2932"). InnerVolumeSpecName "kube-api-access-mjbvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.937507 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "16bf7b67-a057-4dcc-8c5d-8879e73a2932" (UID: "16bf7b67-a057-4dcc-8c5d-8879e73a2932"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:09:52 crc kubenswrapper[4687]: I0131 07:09:52.956088 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "16bf7b67-a057-4dcc-8c5d-8879e73a2932" (UID: "16bf7b67-a057-4dcc-8c5d-8879e73a2932"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.016854 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.016899 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.016909 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjbvb\" (UniqueName: \"kubernetes.io/projected/16bf7b67-a057-4dcc-8c5d-8879e73a2932-kube-api-access-mjbvb\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.016920 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/16bf7b67-a057-4dcc-8c5d-8879e73a2932-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.170146 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance8363-account-delete-ds2fg"] Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.350501 4687 generic.go:334] "Generic (PLEG): container finished" podID="16bf7b67-a057-4dcc-8c5d-8879e73a2932" containerID="810fe131ab19b33f129ced45c7b3944fe322120df73bad7044e7414e110423a2" exitCode=143 Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.350574 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.350594 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"16bf7b67-a057-4dcc-8c5d-8879e73a2932","Type":"ContainerDied","Data":"810fe131ab19b33f129ced45c7b3944fe322120df73bad7044e7414e110423a2"} Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.350629 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"16bf7b67-a057-4dcc-8c5d-8879e73a2932","Type":"ContainerDied","Data":"2d17c8c7766c012b3605a8538cb01d2f0363d038ed3d84c9d60826de1e23a80f"} Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.350646 4687 scope.go:117] "RemoveContainer" containerID="810fe131ab19b33f129ced45c7b3944fe322120df73bad7044e7414e110423a2" Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.352979 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance8363-account-delete-ds2fg" event={"ID":"2ed5beb8-6653-416b-b40d-4ee21cdd1568","Type":"ContainerStarted","Data":"7841fe3e7d07959a672a1a0f6799e03dd0adf7211d2563b1d278f8c19040034d"} Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.353020 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance8363-account-delete-ds2fg" event={"ID":"2ed5beb8-6653-416b-b40d-4ee21cdd1568","Type":"ContainerStarted","Data":"66d6d9061ed15cdd1dcca8a80ace0932871459f206be6e8c5631f4b0ece13c40"} Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.356261 4687 generic.go:334] "Generic (PLEG): container finished" podID="3b10021a-2d1a-45ce-855b-c83f06fe15d0" containerID="782097ca1955afe87099056432efbb11c58169f55e01d13b05cff3109d0810f3" exitCode=143 Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.356341 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" 
event={"ID":"3b10021a-2d1a-45ce-855b-c83f06fe15d0","Type":"ContainerDied","Data":"782097ca1955afe87099056432efbb11c58169f55e01d13b05cff3109d0810f3"} Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.360332 4687 generic.go:334] "Generic (PLEG): container finished" podID="bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" containerID="e286ce491a9f5d586a153f68e4937328f0c75c03d116d97636fa1eb11f744a5f" exitCode=143 Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.360499 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499","Type":"ContainerDied","Data":"e286ce491a9f5d586a153f68e4937328f0c75c03d116d97636fa1eb11f744a5f"} Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.382678 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance8363-account-delete-ds2fg" podStartSLOduration=1.3826583000000001 podStartE2EDuration="1.3826583s" podCreationTimestamp="2026-01-31 07:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:09:53.377505679 +0000 UTC m=+1619.654765274" watchObservedRunningTime="2026-01-31 07:09:53.3826583 +0000 UTC m=+1619.659917885" Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.386557 4687 scope.go:117] "RemoveContainer" containerID="810fe131ab19b33f129ced45c7b3944fe322120df73bad7044e7414e110423a2" Jan 31 07:09:53 crc kubenswrapper[4687]: E0131 07:09:53.387928 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"810fe131ab19b33f129ced45c7b3944fe322120df73bad7044e7414e110423a2\": container with ID starting with 810fe131ab19b33f129ced45c7b3944fe322120df73bad7044e7414e110423a2 not found: ID does not exist" containerID="810fe131ab19b33f129ced45c7b3944fe322120df73bad7044e7414e110423a2" Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 
07:09:53.387971 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"810fe131ab19b33f129ced45c7b3944fe322120df73bad7044e7414e110423a2"} err="failed to get container status \"810fe131ab19b33f129ced45c7b3944fe322120df73bad7044e7414e110423a2\": rpc error: code = NotFound desc = could not find container \"810fe131ab19b33f129ced45c7b3944fe322120df73bad7044e7414e110423a2\": container with ID starting with 810fe131ab19b33f129ced45c7b3944fe322120df73bad7044e7414e110423a2 not found: ID does not exist" Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.397314 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.405586 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.613367 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bf7b67-a057-4dcc-8c5d-8879e73a2932" path="/var/lib/kubelet/pods/16bf7b67-a057-4dcc-8c5d-8879e73a2932/volumes" Jan 31 07:09:53 crc kubenswrapper[4687]: I0131 07:09:53.614128 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a10d060e-1cac-4a26-bdd9-b9b98431ae40" path="/var/lib/kubelet/pods/a10d060e-1cac-4a26-bdd9-b9b98431ae40/volumes" Jan 31 07:09:54 crc kubenswrapper[4687]: I0131 07:09:54.369979 4687 generic.go:334] "Generic (PLEG): container finished" podID="2ed5beb8-6653-416b-b40d-4ee21cdd1568" containerID="7841fe3e7d07959a672a1a0f6799e03dd0adf7211d2563b1d278f8c19040034d" exitCode=0 Jan 31 07:09:54 crc kubenswrapper[4687]: I0131 07:09:54.370017 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance8363-account-delete-ds2fg" event={"ID":"2ed5beb8-6653-416b-b40d-4ee21cdd1568","Type":"ContainerDied","Data":"7841fe3e7d07959a672a1a0f6799e03dd0adf7211d2563b1d278f8c19040034d"} Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 
07:09:55.672207 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance8363-account-delete-ds2fg" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.760911 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwkqj\" (UniqueName: \"kubernetes.io/projected/2ed5beb8-6653-416b-b40d-4ee21cdd1568-kube-api-access-hwkqj\") pod \"2ed5beb8-6653-416b-b40d-4ee21cdd1568\" (UID: \"2ed5beb8-6653-416b-b40d-4ee21cdd1568\") " Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.761033 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ed5beb8-6653-416b-b40d-4ee21cdd1568-operator-scripts\") pod \"2ed5beb8-6653-416b-b40d-4ee21cdd1568\" (UID: \"2ed5beb8-6653-416b-b40d-4ee21cdd1568\") " Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.761805 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed5beb8-6653-416b-b40d-4ee21cdd1568-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ed5beb8-6653-416b-b40d-4ee21cdd1568" (UID: "2ed5beb8-6653-416b-b40d-4ee21cdd1568"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.766613 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed5beb8-6653-416b-b40d-4ee21cdd1568-kube-api-access-hwkqj" (OuterVolumeSpecName: "kube-api-access-hwkqj") pod "2ed5beb8-6653-416b-b40d-4ee21cdd1568" (UID: "2ed5beb8-6653-416b-b40d-4ee21cdd1568"). InnerVolumeSpecName "kube-api-access-hwkqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.809986 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.103:9292/healthcheck\": read tcp 10.217.0.2:57556->10.217.0.103:9292: read: connection reset by peer" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.810066 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.103:9292/healthcheck\": read tcp 10.217.0.2:57546->10.217.0.103:9292: read: connection reset by peer" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.863031 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwkqj\" (UniqueName: \"kubernetes.io/projected/2ed5beb8-6653-416b-b40d-4ee21cdd1568-kube-api-access-hwkqj\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.863068 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ed5beb8-6653-416b-b40d-4ee21cdd1568-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.945560 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.963630 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b10021a-2d1a-45ce-855b-c83f06fe15d0-httpd-run\") pod \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.963683 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-etc-nvme\") pod \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.963782 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.963801 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.963856 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b10021a-2d1a-45ce-855b-c83f06fe15d0-config-data\") pod \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.963882 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z9fb\" (UniqueName: 
\"kubernetes.io/projected/3b10021a-2d1a-45ce-855b-c83f06fe15d0-kube-api-access-5z9fb\") pod \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.963904 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-lib-modules\") pod \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.963920 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-etc-iscsi\") pod \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.963944 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-dev\") pod \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.963977 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b10021a-2d1a-45ce-855b-c83f06fe15d0-scripts\") pod \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.964003 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-sys\") pod \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.964024 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b10021a-2d1a-45ce-855b-c83f06fe15d0-logs\") pod \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.964037 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-run\") pod \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.964053 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-var-locks-brick\") pod \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\" (UID: \"3b10021a-2d1a-45ce-855b-c83f06fe15d0\") " Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.963940 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b10021a-2d1a-45ce-855b-c83f06fe15d0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3b10021a-2d1a-45ce-855b-c83f06fe15d0" (UID: "3b10021a-2d1a-45ce-855b-c83f06fe15d0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.964353 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "3b10021a-2d1a-45ce-855b-c83f06fe15d0" (UID: "3b10021a-2d1a-45ce-855b-c83f06fe15d0"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.964660 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "3b10021a-2d1a-45ce-855b-c83f06fe15d0" (UID: "3b10021a-2d1a-45ce-855b-c83f06fe15d0"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.964716 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3b10021a-2d1a-45ce-855b-c83f06fe15d0" (UID: "3b10021a-2d1a-45ce-855b-c83f06fe15d0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.964737 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "3b10021a-2d1a-45ce-855b-c83f06fe15d0" (UID: "3b10021a-2d1a-45ce-855b-c83f06fe15d0"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.964757 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-dev" (OuterVolumeSpecName: "dev") pod "3b10021a-2d1a-45ce-855b-c83f06fe15d0" (UID: "3b10021a-2d1a-45ce-855b-c83f06fe15d0"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.964778 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-sys" (OuterVolumeSpecName: "sys") pod "3b10021a-2d1a-45ce-855b-c83f06fe15d0" (UID: "3b10021a-2d1a-45ce-855b-c83f06fe15d0"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.965726 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-run" (OuterVolumeSpecName: "run") pod "3b10021a-2d1a-45ce-855b-c83f06fe15d0" (UID: "3b10021a-2d1a-45ce-855b-c83f06fe15d0"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.965968 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b10021a-2d1a-45ce-855b-c83f06fe15d0-logs" (OuterVolumeSpecName: "logs") pod "3b10021a-2d1a-45ce-855b-c83f06fe15d0" (UID: "3b10021a-2d1a-45ce-855b-c83f06fe15d0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.968352 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b10021a-2d1a-45ce-855b-c83f06fe15d0-kube-api-access-5z9fb" (OuterVolumeSpecName: "kube-api-access-5z9fb") pod "3b10021a-2d1a-45ce-855b-c83f06fe15d0" (UID: "3b10021a-2d1a-45ce-855b-c83f06fe15d0"). InnerVolumeSpecName "kube-api-access-5z9fb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.968847 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b10021a-2d1a-45ce-855b-c83f06fe15d0-scripts" (OuterVolumeSpecName: "scripts") pod "3b10021a-2d1a-45ce-855b-c83f06fe15d0" (UID: "3b10021a-2d1a-45ce-855b-c83f06fe15d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.968861 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "3b10021a-2d1a-45ce-855b-c83f06fe15d0" (UID: "3b10021a-2d1a-45ce-855b-c83f06fe15d0"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:09:55 crc kubenswrapper[4687]: I0131 07:09:55.971614 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage20-crc" (OuterVolumeSpecName: "glance-cache") pod "3b10021a-2d1a-45ce-855b-c83f06fe15d0" (UID: "3b10021a-2d1a-45ce-855b-c83f06fe15d0"). InnerVolumeSpecName "local-storage20-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.019510 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b10021a-2d1a-45ce-855b-c83f06fe15d0-config-data" (OuterVolumeSpecName: "config-data") pod "3b10021a-2d1a-45ce-855b-c83f06fe15d0" (UID: "3b10021a-2d1a-45ce-855b-c83f06fe15d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.065305 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") on node \"crc\" " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.065627 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.065640 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b10021a-2d1a-45ce-855b-c83f06fe15d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.065651 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z9fb\" (UniqueName: \"kubernetes.io/projected/3b10021a-2d1a-45ce-855b-c83f06fe15d0-kube-api-access-5z9fb\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.065662 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.065670 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.065678 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.065686 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/3b10021a-2d1a-45ce-855b-c83f06fe15d0-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.065694 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.065700 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.065708 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b10021a-2d1a-45ce-855b-c83f06fe15d0-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.065715 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.065725 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b10021a-2d1a-45ce-855b-c83f06fe15d0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.065732 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3b10021a-2d1a-45ce-855b-c83f06fe15d0-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.082617 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.082617 4687 operation_generator.go:917] UnmountDevice succeeded for 
volume "local-storage20-crc" (UniqueName: "kubernetes.io/local-volume/local-storage20-crc") on node "crc" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.158496 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.167566 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.167598 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.268498 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-httpd-run\") pod \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.268571 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-lib-modules\") pod \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.268597 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.268622 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-etc-iscsi\") pod \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.268654 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-var-locks-brick\") pod \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.268670 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.268685 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" (UID: "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.268705 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-dev\") pod \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.268728 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" (UID: "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.268741 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-etc-nvme\") pod \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.268776 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-scripts\") pod \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.268794 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-config-data\") pod \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.268841 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-run\") pod \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.268905 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzhcx\" (UniqueName: \"kubernetes.io/projected/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-kube-api-access-zzhcx\") pod \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.268961 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-logs\") pod \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.268984 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-sys\") pod \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\" (UID: \"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499\") " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.269039 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-dev" (OuterVolumeSpecName: "dev") pod "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" (UID: "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.269063 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" (UID: "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.269143 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-run" (OuterVolumeSpecName: "run") pod "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" (UID: "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.269211 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" (UID: "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.269234 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-sys" (OuterVolumeSpecName: "sys") pod "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" (UID: "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.269257 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" (UID: "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.269450 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.269491 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.269505 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.269515 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.269525 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.269534 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.269543 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.269553 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.269538 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-logs" (OuterVolumeSpecName: "logs") pod "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" (UID: "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.273177 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance-cache") pod "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" (UID: "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.273302 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-scripts" (OuterVolumeSpecName: "scripts") pod "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" (UID: "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.273370 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-kube-api-access-zzhcx" (OuterVolumeSpecName: "kube-api-access-zzhcx") pod "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" (UID: "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499"). InnerVolumeSpecName "kube-api-access-zzhcx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.274572 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" (UID: "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.313616 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-config-data" (OuterVolumeSpecName: "config-data") pod "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" (UID: "bd5490fd-a889-4b5c-bc5e-bd6c7dbee499"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.370392 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.370469 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.370484 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.370496 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.370505 4687 reconciler_common.go:293] 
"Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.370515 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzhcx\" (UniqueName: \"kubernetes.io/projected/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499-kube-api-access-zzhcx\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.382550 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.385134 4687 generic.go:334] "Generic (PLEG): container finished" podID="bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" containerID="1a3491e4f3de4c2f2bfc9cdd9d078d8111f558d3d5c4a6ed75fc0ab87c1779f7" exitCode=0 Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.385196 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.385201 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499","Type":"ContainerDied","Data":"1a3491e4f3de4c2f2bfc9cdd9d078d8111f558d3d5c4a6ed75fc0ab87c1779f7"} Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.385230 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"bd5490fd-a889-4b5c-bc5e-bd6c7dbee499","Type":"ContainerDied","Data":"148c7a233d04ae85d4b07a7d670761c2b056fce6864ae02e7a4c88cd349179c4"} Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.385246 4687 scope.go:117] "RemoveContainer" containerID="1a3491e4f3de4c2f2bfc9cdd9d078d8111f558d3d5c4a6ed75fc0ab87c1779f7" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.386983 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance8363-account-delete-ds2fg" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.386982 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance8363-account-delete-ds2fg" event={"ID":"2ed5beb8-6653-416b-b40d-4ee21cdd1568","Type":"ContainerDied","Data":"66d6d9061ed15cdd1dcca8a80ace0932871459f206be6e8c5631f4b0ece13c40"} Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.387019 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66d6d9061ed15cdd1dcca8a80ace0932871459f206be6e8c5631f4b0ece13c40" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.390392 4687 generic.go:334] "Generic (PLEG): container finished" podID="3b10021a-2d1a-45ce-855b-c83f06fe15d0" containerID="0112455a1956841a26a7c824fb3f602c55f76d3431b009109bdc0fb0a3976481" exitCode=0 Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.390452 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.390484 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"3b10021a-2d1a-45ce-855b-c83f06fe15d0","Type":"ContainerDied","Data":"0112455a1956841a26a7c824fb3f602c55f76d3431b009109bdc0fb0a3976481"} Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.390833 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"3b10021a-2d1a-45ce-855b-c83f06fe15d0","Type":"ContainerDied","Data":"9abe1e0c55ac7fd3741e100320c62d8ef210303b5e0d071bf2e64e6aee5ec807"} Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.391247 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.407826 
4687 scope.go:117] "RemoveContainer" containerID="e286ce491a9f5d586a153f68e4937328f0c75c03d116d97636fa1eb11f744a5f" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.434702 4687 scope.go:117] "RemoveContainer" containerID="1a3491e4f3de4c2f2bfc9cdd9d078d8111f558d3d5c4a6ed75fc0ab87c1779f7" Jan 31 07:09:56 crc kubenswrapper[4687]: E0131 07:09:56.435172 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a3491e4f3de4c2f2bfc9cdd9d078d8111f558d3d5c4a6ed75fc0ab87c1779f7\": container with ID starting with 1a3491e4f3de4c2f2bfc9cdd9d078d8111f558d3d5c4a6ed75fc0ab87c1779f7 not found: ID does not exist" containerID="1a3491e4f3de4c2f2bfc9cdd9d078d8111f558d3d5c4a6ed75fc0ab87c1779f7" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.435217 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a3491e4f3de4c2f2bfc9cdd9d078d8111f558d3d5c4a6ed75fc0ab87c1779f7"} err="failed to get container status \"1a3491e4f3de4c2f2bfc9cdd9d078d8111f558d3d5c4a6ed75fc0ab87c1779f7\": rpc error: code = NotFound desc = could not find container \"1a3491e4f3de4c2f2bfc9cdd9d078d8111f558d3d5c4a6ed75fc0ab87c1779f7\": container with ID starting with 1a3491e4f3de4c2f2bfc9cdd9d078d8111f558d3d5c4a6ed75fc0ab87c1779f7 not found: ID does not exist" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.435246 4687 scope.go:117] "RemoveContainer" containerID="e286ce491a9f5d586a153f68e4937328f0c75c03d116d97636fa1eb11f744a5f" Jan 31 07:09:56 crc kubenswrapper[4687]: E0131 07:09:56.435920 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e286ce491a9f5d586a153f68e4937328f0c75c03d116d97636fa1eb11f744a5f\": container with ID starting with e286ce491a9f5d586a153f68e4937328f0c75c03d116d97636fa1eb11f744a5f not found: ID does not exist" containerID="e286ce491a9f5d586a153f68e4937328f0c75c03d116d97636fa1eb11f744a5f" 
Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.435944 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e286ce491a9f5d586a153f68e4937328f0c75c03d116d97636fa1eb11f744a5f"} err="failed to get container status \"e286ce491a9f5d586a153f68e4937328f0c75c03d116d97636fa1eb11f744a5f\": rpc error: code = NotFound desc = could not find container \"e286ce491a9f5d586a153f68e4937328f0c75c03d116d97636fa1eb11f744a5f\": container with ID starting with e286ce491a9f5d586a153f68e4937328f0c75c03d116d97636fa1eb11f744a5f not found: ID does not exist" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.435961 4687 scope.go:117] "RemoveContainer" containerID="0112455a1956841a26a7c824fb3f602c55f76d3431b009109bdc0fb0a3976481" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.444929 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.453442 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.462459 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.464228 4687 scope.go:117] "RemoveContainer" containerID="782097ca1955afe87099056432efbb11c58169f55e01d13b05cff3109d0810f3" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.469473 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.471546 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.471572 4687 reconciler_common.go:293] "Volume detached for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.478735 4687 scope.go:117] "RemoveContainer" containerID="0112455a1956841a26a7c824fb3f602c55f76d3431b009109bdc0fb0a3976481" Jan 31 07:09:56 crc kubenswrapper[4687]: E0131 07:09:56.479359 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0112455a1956841a26a7c824fb3f602c55f76d3431b009109bdc0fb0a3976481\": container with ID starting with 0112455a1956841a26a7c824fb3f602c55f76d3431b009109bdc0fb0a3976481 not found: ID does not exist" containerID="0112455a1956841a26a7c824fb3f602c55f76d3431b009109bdc0fb0a3976481" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.479396 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0112455a1956841a26a7c824fb3f602c55f76d3431b009109bdc0fb0a3976481"} err="failed to get container status \"0112455a1956841a26a7c824fb3f602c55f76d3431b009109bdc0fb0a3976481\": rpc error: code = NotFound desc = could not find container \"0112455a1956841a26a7c824fb3f602c55f76d3431b009109bdc0fb0a3976481\": container with ID starting with 0112455a1956841a26a7c824fb3f602c55f76d3431b009109bdc0fb0a3976481 not found: ID does not exist" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.479432 4687 scope.go:117] "RemoveContainer" containerID="782097ca1955afe87099056432efbb11c58169f55e01d13b05cff3109d0810f3" Jan 31 07:09:56 crc kubenswrapper[4687]: E0131 07:09:56.479649 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"782097ca1955afe87099056432efbb11c58169f55e01d13b05cff3109d0810f3\": container with ID starting with 782097ca1955afe87099056432efbb11c58169f55e01d13b05cff3109d0810f3 not found: ID does not exist" 
containerID="782097ca1955afe87099056432efbb11c58169f55e01d13b05cff3109d0810f3" Jan 31 07:09:56 crc kubenswrapper[4687]: I0131 07:09:56.479666 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"782097ca1955afe87099056432efbb11c58169f55e01d13b05cff3109d0810f3"} err="failed to get container status \"782097ca1955afe87099056432efbb11c58169f55e01d13b05cff3109d0810f3\": rpc error: code = NotFound desc = could not find container \"782097ca1955afe87099056432efbb11c58169f55e01d13b05cff3109d0810f3\": container with ID starting with 782097ca1955afe87099056432efbb11c58169f55e01d13b05cff3109d0810f3 not found: ID does not exist" Jan 31 07:09:57 crc kubenswrapper[4687]: I0131 07:09:57.350753 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-b2ld2"] Jan 31 07:09:57 crc kubenswrapper[4687]: I0131 07:09:57.358882 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-b2ld2"] Jan 31 07:09:57 crc kubenswrapper[4687]: I0131 07:09:57.364669 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance8363-account-delete-ds2fg"] Jan 31 07:09:57 crc kubenswrapper[4687]: I0131 07:09:57.369570 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-8363-account-create-update-9htwj"] Jan 31 07:09:57 crc kubenswrapper[4687]: I0131 07:09:57.374115 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance8363-account-delete-ds2fg"] Jan 31 07:09:57 crc kubenswrapper[4687]: I0131 07:09:57.378261 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-8363-account-create-update-9htwj"] Jan 31 07:09:57 crc kubenswrapper[4687]: I0131 07:09:57.604086 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:09:57 crc kubenswrapper[4687]: E0131 07:09:57.604293 4687 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:09:57 crc kubenswrapper[4687]: I0131 07:09:57.612003 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04701a36-4402-409c-86fb-4d4240226b7b" path="/var/lib/kubelet/pods/04701a36-4402-409c-86fb-4d4240226b7b/volumes" Jan 31 07:09:57 crc kubenswrapper[4687]: I0131 07:09:57.612592 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed5beb8-6653-416b-b40d-4ee21cdd1568" path="/var/lib/kubelet/pods/2ed5beb8-6653-416b-b40d-4ee21cdd1568/volumes" Jan 31 07:09:57 crc kubenswrapper[4687]: I0131 07:09:57.613116 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b10021a-2d1a-45ce-855b-c83f06fe15d0" path="/var/lib/kubelet/pods/3b10021a-2d1a-45ce-855b-c83f06fe15d0/volumes" Jan 31 07:09:57 crc kubenswrapper[4687]: I0131 07:09:57.614267 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5072d38f-e7e3-4f83-a1d0-9220fabfd685" path="/var/lib/kubelet/pods/5072d38f-e7e3-4f83-a1d0-9220fabfd685/volumes" Jan 31 07:09:57 crc kubenswrapper[4687]: I0131 07:09:57.614838 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" path="/var/lib/kubelet/pods/bd5490fd-a889-4b5c-bc5e-bd6c7dbee499/volumes" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.530508 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-587zg"] Jan 31 07:09:58 crc kubenswrapper[4687]: E0131 07:09:58.530824 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" 
containerName="glance-httpd" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.530844 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" containerName="glance-httpd" Jan 31 07:09:58 crc kubenswrapper[4687]: E0131 07:09:58.530859 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed5beb8-6653-416b-b40d-4ee21cdd1568" containerName="mariadb-account-delete" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.530867 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed5beb8-6653-416b-b40d-4ee21cdd1568" containerName="mariadb-account-delete" Jan 31 07:09:58 crc kubenswrapper[4687]: E0131 07:09:58.530899 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16bf7b67-a057-4dcc-8c5d-8879e73a2932" containerName="openstackclient" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.530908 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="16bf7b67-a057-4dcc-8c5d-8879e73a2932" containerName="openstackclient" Jan 31 07:09:58 crc kubenswrapper[4687]: E0131 07:09:58.530920 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b10021a-2d1a-45ce-855b-c83f06fe15d0" containerName="glance-httpd" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.530928 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b10021a-2d1a-45ce-855b-c83f06fe15d0" containerName="glance-httpd" Jan 31 07:09:58 crc kubenswrapper[4687]: E0131 07:09:58.530942 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" containerName="glance-log" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.530949 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" containerName="glance-log" Jan 31 07:09:58 crc kubenswrapper[4687]: E0131 07:09:58.530962 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b10021a-2d1a-45ce-855b-c83f06fe15d0" 
containerName="glance-log" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.530969 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b10021a-2d1a-45ce-855b-c83f06fe15d0" containerName="glance-log" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.531129 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" containerName="glance-httpd" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.531145 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b10021a-2d1a-45ce-855b-c83f06fe15d0" containerName="glance-log" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.531156 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed5beb8-6653-416b-b40d-4ee21cdd1568" containerName="mariadb-account-delete" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.531175 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b10021a-2d1a-45ce-855b-c83f06fe15d0" containerName="glance-httpd" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.531187 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5490fd-a889-4b5c-bc5e-bd6c7dbee499" containerName="glance-log" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.531201 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="16bf7b67-a057-4dcc-8c5d-8879e73a2932" containerName="openstackclient" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.531757 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-587zg" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.540033 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-7fb4-account-create-update-8xj92"] Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.541364 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-7fb4-account-create-update-8xj92" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.544143 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.548487 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-587zg"] Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.571998 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-7fb4-account-create-update-8xj92"] Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.596586 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp46g\" (UniqueName: \"kubernetes.io/projected/f66527f6-3688-4da0-b142-5b2a4d6837c4-kube-api-access-bp46g\") pod \"glance-7fb4-account-create-update-8xj92\" (UID: \"f66527f6-3688-4da0-b142-5b2a4d6837c4\") " pod="glance-kuttl-tests/glance-7fb4-account-create-update-8xj92" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.596640 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5729755f-9a6f-44fb-9b36-fbff7c52a62c-operator-scripts\") pod \"glance-db-create-587zg\" (UID: \"5729755f-9a6f-44fb-9b36-fbff7c52a62c\") " pod="glance-kuttl-tests/glance-db-create-587zg" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.596667 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5fzj\" (UniqueName: \"kubernetes.io/projected/5729755f-9a6f-44fb-9b36-fbff7c52a62c-kube-api-access-t5fzj\") pod \"glance-db-create-587zg\" (UID: \"5729755f-9a6f-44fb-9b36-fbff7c52a62c\") " pod="glance-kuttl-tests/glance-db-create-587zg" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.596785 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66527f6-3688-4da0-b142-5b2a4d6837c4-operator-scripts\") pod \"glance-7fb4-account-create-update-8xj92\" (UID: \"f66527f6-3688-4da0-b142-5b2a4d6837c4\") " pod="glance-kuttl-tests/glance-7fb4-account-create-update-8xj92" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.698194 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5729755f-9a6f-44fb-9b36-fbff7c52a62c-operator-scripts\") pod \"glance-db-create-587zg\" (UID: \"5729755f-9a6f-44fb-9b36-fbff7c52a62c\") " pod="glance-kuttl-tests/glance-db-create-587zg" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.698272 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5fzj\" (UniqueName: \"kubernetes.io/projected/5729755f-9a6f-44fb-9b36-fbff7c52a62c-kube-api-access-t5fzj\") pod \"glance-db-create-587zg\" (UID: \"5729755f-9a6f-44fb-9b36-fbff7c52a62c\") " pod="glance-kuttl-tests/glance-db-create-587zg" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.698347 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66527f6-3688-4da0-b142-5b2a4d6837c4-operator-scripts\") pod \"glance-7fb4-account-create-update-8xj92\" (UID: \"f66527f6-3688-4da0-b142-5b2a4d6837c4\") " pod="glance-kuttl-tests/glance-7fb4-account-create-update-8xj92" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.698599 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp46g\" (UniqueName: \"kubernetes.io/projected/f66527f6-3688-4da0-b142-5b2a4d6837c4-kube-api-access-bp46g\") pod \"glance-7fb4-account-create-update-8xj92\" (UID: \"f66527f6-3688-4da0-b142-5b2a4d6837c4\") " pod="glance-kuttl-tests/glance-7fb4-account-create-update-8xj92" 
Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.699078 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66527f6-3688-4da0-b142-5b2a4d6837c4-operator-scripts\") pod \"glance-7fb4-account-create-update-8xj92\" (UID: \"f66527f6-3688-4da0-b142-5b2a4d6837c4\") " pod="glance-kuttl-tests/glance-7fb4-account-create-update-8xj92" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.699160 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5729755f-9a6f-44fb-9b36-fbff7c52a62c-operator-scripts\") pod \"glance-db-create-587zg\" (UID: \"5729755f-9a6f-44fb-9b36-fbff7c52a62c\") " pod="glance-kuttl-tests/glance-db-create-587zg" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.716774 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5fzj\" (UniqueName: \"kubernetes.io/projected/5729755f-9a6f-44fb-9b36-fbff7c52a62c-kube-api-access-t5fzj\") pod \"glance-db-create-587zg\" (UID: \"5729755f-9a6f-44fb-9b36-fbff7c52a62c\") " pod="glance-kuttl-tests/glance-db-create-587zg" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.724021 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp46g\" (UniqueName: \"kubernetes.io/projected/f66527f6-3688-4da0-b142-5b2a4d6837c4-kube-api-access-bp46g\") pod \"glance-7fb4-account-create-update-8xj92\" (UID: \"f66527f6-3688-4da0-b142-5b2a4d6837c4\") " pod="glance-kuttl-tests/glance-7fb4-account-create-update-8xj92" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.855022 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-587zg" Jan 31 07:09:58 crc kubenswrapper[4687]: I0131 07:09:58.871655 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-7fb4-account-create-update-8xj92" Jan 31 07:09:59 crc kubenswrapper[4687]: I0131 07:09:59.263081 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-587zg"] Jan 31 07:09:59 crc kubenswrapper[4687]: W0131 07:09:59.265642 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5729755f_9a6f_44fb_9b36_fbff7c52a62c.slice/crio-1d2e5a722c0014b38ebca512714dc451a70cc049802800045809ec22436d373e WatchSource:0}: Error finding container 1d2e5a722c0014b38ebca512714dc451a70cc049802800045809ec22436d373e: Status 404 returned error can't find the container with id 1d2e5a722c0014b38ebca512714dc451a70cc049802800045809ec22436d373e Jan 31 07:09:59 crc kubenswrapper[4687]: I0131 07:09:59.322124 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-7fb4-account-create-update-8xj92"] Jan 31 07:09:59 crc kubenswrapper[4687]: W0131 07:09:59.328492 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf66527f6_3688_4da0_b142_5b2a4d6837c4.slice/crio-000e8a0b28f67aef793b652f90279deb22ee3fb2c1b5024f16e4b4274145af0b WatchSource:0}: Error finding container 000e8a0b28f67aef793b652f90279deb22ee3fb2c1b5024f16e4b4274145af0b: Status 404 returned error can't find the container with id 000e8a0b28f67aef793b652f90279deb22ee3fb2c1b5024f16e4b4274145af0b Jan 31 07:09:59 crc kubenswrapper[4687]: I0131 07:09:59.412885 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-7fb4-account-create-update-8xj92" event={"ID":"f66527f6-3688-4da0-b142-5b2a4d6837c4","Type":"ContainerStarted","Data":"000e8a0b28f67aef793b652f90279deb22ee3fb2c1b5024f16e4b4274145af0b"} Jan 31 07:09:59 crc kubenswrapper[4687]: I0131 07:09:59.414324 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-db-create-587zg" event={"ID":"5729755f-9a6f-44fb-9b36-fbff7c52a62c","Type":"ContainerStarted","Data":"1d2e5a722c0014b38ebca512714dc451a70cc049802800045809ec22436d373e"} Jan 31 07:10:00 crc kubenswrapper[4687]: I0131 07:10:00.421614 4687 generic.go:334] "Generic (PLEG): container finished" podID="f66527f6-3688-4da0-b142-5b2a4d6837c4" containerID="69fa6ca1f95a368a0edc97c59b35e4695761b243b8efc226918947e098854a57" exitCode=0 Jan 31 07:10:00 crc kubenswrapper[4687]: I0131 07:10:00.421694 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-7fb4-account-create-update-8xj92" event={"ID":"f66527f6-3688-4da0-b142-5b2a4d6837c4","Type":"ContainerDied","Data":"69fa6ca1f95a368a0edc97c59b35e4695761b243b8efc226918947e098854a57"} Jan 31 07:10:00 crc kubenswrapper[4687]: I0131 07:10:00.424213 4687 generic.go:334] "Generic (PLEG): container finished" podID="5729755f-9a6f-44fb-9b36-fbff7c52a62c" containerID="62112938a0f6beba916d6eb94597064a766027b189ac6b309f2ff9091fa3d445" exitCode=0 Jan 31 07:10:00 crc kubenswrapper[4687]: I0131 07:10:00.424273 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-587zg" event={"ID":"5729755f-9a6f-44fb-9b36-fbff7c52a62c","Type":"ContainerDied","Data":"62112938a0f6beba916d6eb94597064a766027b189ac6b309f2ff9091fa3d445"} Jan 31 07:10:01 crc kubenswrapper[4687]: I0131 07:10:01.751992 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-587zg" Jan 31 07:10:01 crc kubenswrapper[4687]: I0131 07:10:01.758463 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-7fb4-account-create-update-8xj92" Jan 31 07:10:01 crc kubenswrapper[4687]: I0131 07:10:01.842590 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66527f6-3688-4da0-b142-5b2a4d6837c4-operator-scripts\") pod \"f66527f6-3688-4da0-b142-5b2a4d6837c4\" (UID: \"f66527f6-3688-4da0-b142-5b2a4d6837c4\") " Jan 31 07:10:01 crc kubenswrapper[4687]: I0131 07:10:01.842661 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5729755f-9a6f-44fb-9b36-fbff7c52a62c-operator-scripts\") pod \"5729755f-9a6f-44fb-9b36-fbff7c52a62c\" (UID: \"5729755f-9a6f-44fb-9b36-fbff7c52a62c\") " Jan 31 07:10:01 crc kubenswrapper[4687]: I0131 07:10:01.842704 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp46g\" (UniqueName: \"kubernetes.io/projected/f66527f6-3688-4da0-b142-5b2a4d6837c4-kube-api-access-bp46g\") pod \"f66527f6-3688-4da0-b142-5b2a4d6837c4\" (UID: \"f66527f6-3688-4da0-b142-5b2a4d6837c4\") " Jan 31 07:10:01 crc kubenswrapper[4687]: I0131 07:10:01.842774 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5fzj\" (UniqueName: \"kubernetes.io/projected/5729755f-9a6f-44fb-9b36-fbff7c52a62c-kube-api-access-t5fzj\") pod \"5729755f-9a6f-44fb-9b36-fbff7c52a62c\" (UID: \"5729755f-9a6f-44fb-9b36-fbff7c52a62c\") " Jan 31 07:10:01 crc kubenswrapper[4687]: I0131 07:10:01.843609 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f66527f6-3688-4da0-b142-5b2a4d6837c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f66527f6-3688-4da0-b142-5b2a4d6837c4" (UID: "f66527f6-3688-4da0-b142-5b2a4d6837c4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:10:01 crc kubenswrapper[4687]: I0131 07:10:01.843655 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5729755f-9a6f-44fb-9b36-fbff7c52a62c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5729755f-9a6f-44fb-9b36-fbff7c52a62c" (UID: "5729755f-9a6f-44fb-9b36-fbff7c52a62c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:10:01 crc kubenswrapper[4687]: I0131 07:10:01.848562 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f66527f6-3688-4da0-b142-5b2a4d6837c4-kube-api-access-bp46g" (OuterVolumeSpecName: "kube-api-access-bp46g") pod "f66527f6-3688-4da0-b142-5b2a4d6837c4" (UID: "f66527f6-3688-4da0-b142-5b2a4d6837c4"). InnerVolumeSpecName "kube-api-access-bp46g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:10:01 crc kubenswrapper[4687]: I0131 07:10:01.849925 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5729755f-9a6f-44fb-9b36-fbff7c52a62c-kube-api-access-t5fzj" (OuterVolumeSpecName: "kube-api-access-t5fzj") pod "5729755f-9a6f-44fb-9b36-fbff7c52a62c" (UID: "5729755f-9a6f-44fb-9b36-fbff7c52a62c"). InnerVolumeSpecName "kube-api-access-t5fzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:10:02 crc kubenswrapper[4687]: I0131 07:10:02.183679 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66527f6-3688-4da0-b142-5b2a4d6837c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:02 crc kubenswrapper[4687]: I0131 07:10:02.183748 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5729755f-9a6f-44fb-9b36-fbff7c52a62c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:02 crc kubenswrapper[4687]: I0131 07:10:02.183766 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp46g\" (UniqueName: \"kubernetes.io/projected/f66527f6-3688-4da0-b142-5b2a4d6837c4-kube-api-access-bp46g\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:02 crc kubenswrapper[4687]: I0131 07:10:02.183785 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5fzj\" (UniqueName: \"kubernetes.io/projected/5729755f-9a6f-44fb-9b36-fbff7c52a62c-kube-api-access-t5fzj\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:02 crc kubenswrapper[4687]: I0131 07:10:02.439598 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-7fb4-account-create-update-8xj92" event={"ID":"f66527f6-3688-4da0-b142-5b2a4d6837c4","Type":"ContainerDied","Data":"000e8a0b28f67aef793b652f90279deb22ee3fb2c1b5024f16e4b4274145af0b"} Jan 31 07:10:02 crc kubenswrapper[4687]: I0131 07:10:02.439655 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="000e8a0b28f67aef793b652f90279deb22ee3fb2c1b5024f16e4b4274145af0b" Jan 31 07:10:02 crc kubenswrapper[4687]: I0131 07:10:02.439713 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-7fb4-account-create-update-8xj92" Jan 31 07:10:02 crc kubenswrapper[4687]: I0131 07:10:02.444045 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-587zg" event={"ID":"5729755f-9a6f-44fb-9b36-fbff7c52a62c","Type":"ContainerDied","Data":"1d2e5a722c0014b38ebca512714dc451a70cc049802800045809ec22436d373e"} Jan 31 07:10:02 crc kubenswrapper[4687]: I0131 07:10:02.444077 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d2e5a722c0014b38ebca512714dc451a70cc049802800045809ec22436d373e" Jan 31 07:10:02 crc kubenswrapper[4687]: I0131 07:10:02.444120 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-587zg" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.669714 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-6r9nz"] Jan 31 07:10:03 crc kubenswrapper[4687]: E0131 07:10:03.670397 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66527f6-3688-4da0-b142-5b2a4d6837c4" containerName="mariadb-account-create-update" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.670436 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66527f6-3688-4da0-b142-5b2a4d6837c4" containerName="mariadb-account-create-update" Jan 31 07:10:03 crc kubenswrapper[4687]: E0131 07:10:03.670475 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5729755f-9a6f-44fb-9b36-fbff7c52a62c" containerName="mariadb-database-create" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.670483 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5729755f-9a6f-44fb-9b36-fbff7c52a62c" containerName="mariadb-database-create" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.670682 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f66527f6-3688-4da0-b142-5b2a4d6837c4" 
containerName="mariadb-account-create-update" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.670701 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5729755f-9a6f-44fb-9b36-fbff7c52a62c" containerName="mariadb-database-create" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.671317 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-6r9nz" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.674071 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.674791 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-kncz5" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.674979 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"combined-ca-bundle" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.676380 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-6r9nz"] Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.712635 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-config-data\") pod \"glance-db-sync-6r9nz\" (UID: \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\") " pod="glance-kuttl-tests/glance-db-sync-6r9nz" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.712921 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-db-sync-config-data\") pod \"glance-db-sync-6r9nz\" (UID: \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\") " pod="glance-kuttl-tests/glance-db-sync-6r9nz" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.713081 
4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-combined-ca-bundle\") pod \"glance-db-sync-6r9nz\" (UID: \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\") " pod="glance-kuttl-tests/glance-db-sync-6r9nz" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.713219 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmddk\" (UniqueName: \"kubernetes.io/projected/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-kube-api-access-tmddk\") pod \"glance-db-sync-6r9nz\" (UID: \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\") " pod="glance-kuttl-tests/glance-db-sync-6r9nz" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.814907 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-combined-ca-bundle\") pod \"glance-db-sync-6r9nz\" (UID: \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\") " pod="glance-kuttl-tests/glance-db-sync-6r9nz" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.814977 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmddk\" (UniqueName: \"kubernetes.io/projected/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-kube-api-access-tmddk\") pod \"glance-db-sync-6r9nz\" (UID: \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\") " pod="glance-kuttl-tests/glance-db-sync-6r9nz" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.815057 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-config-data\") pod \"glance-db-sync-6r9nz\" (UID: \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\") " pod="glance-kuttl-tests/glance-db-sync-6r9nz" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.815084 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-db-sync-config-data\") pod \"glance-db-sync-6r9nz\" (UID: \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\") " pod="glance-kuttl-tests/glance-db-sync-6r9nz" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.821166 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-combined-ca-bundle\") pod \"glance-db-sync-6r9nz\" (UID: \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\") " pod="glance-kuttl-tests/glance-db-sync-6r9nz" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.821295 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-config-data\") pod \"glance-db-sync-6r9nz\" (UID: \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\") " pod="glance-kuttl-tests/glance-db-sync-6r9nz" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.832004 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-db-sync-config-data\") pod \"glance-db-sync-6r9nz\" (UID: \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\") " pod="glance-kuttl-tests/glance-db-sync-6r9nz" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.844036 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmddk\" (UniqueName: \"kubernetes.io/projected/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-kube-api-access-tmddk\") pod \"glance-db-sync-6r9nz\" (UID: \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\") " pod="glance-kuttl-tests/glance-db-sync-6r9nz" Jan 31 07:10:03 crc kubenswrapper[4687]: I0131 07:10:03.992824 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-6r9nz" Jan 31 07:10:05 crc kubenswrapper[4687]: I0131 07:10:05.024967 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-6r9nz"] Jan 31 07:10:05 crc kubenswrapper[4687]: I0131 07:10:05.470870 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-6r9nz" event={"ID":"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4","Type":"ContainerStarted","Data":"c08089d1fbbef61eba30da69acb22ef71236df1a359a6d0a57cacc0366e2d09c"} Jan 31 07:10:06 crc kubenswrapper[4687]: I0131 07:10:06.479993 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-6r9nz" event={"ID":"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4","Type":"ContainerStarted","Data":"e3336cb60c46d96b899d59c9b6ce3d2a13ae7b11bfae8a5041e1cb251a81075c"} Jan 31 07:10:09 crc kubenswrapper[4687]: I0131 07:10:09.505973 4687 generic.go:334] "Generic (PLEG): container finished" podID="a7b2a1f7-85f5-473a-80dc-e9b734e25bd4" containerID="e3336cb60c46d96b899d59c9b6ce3d2a13ae7b11bfae8a5041e1cb251a81075c" exitCode=0 Jan 31 07:10:09 crc kubenswrapper[4687]: I0131 07:10:09.506081 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-6r9nz" event={"ID":"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4","Type":"ContainerDied","Data":"e3336cb60c46d96b899d59c9b6ce3d2a13ae7b11bfae8a5041e1cb251a81075c"} Jan 31 07:10:09 crc kubenswrapper[4687]: I0131 07:10:09.602982 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:10:09 crc kubenswrapper[4687]: E0131 07:10:09.603270 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:10:10 crc kubenswrapper[4687]: I0131 07:10:10.948226 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-6r9nz" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.106340 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-config-data\") pod \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\" (UID: \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\") " Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.106431 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmddk\" (UniqueName: \"kubernetes.io/projected/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-kube-api-access-tmddk\") pod \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\" (UID: \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\") " Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.106564 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-combined-ca-bundle\") pod \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\" (UID: \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\") " Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.106622 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-db-sync-config-data\") pod \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\" (UID: \"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4\") " Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.123194 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-kube-api-access-tmddk" (OuterVolumeSpecName: 
"kube-api-access-tmddk") pod "a7b2a1f7-85f5-473a-80dc-e9b734e25bd4" (UID: "a7b2a1f7-85f5-473a-80dc-e9b734e25bd4"). InnerVolumeSpecName "kube-api-access-tmddk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.123306 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a7b2a1f7-85f5-473a-80dc-e9b734e25bd4" (UID: "a7b2a1f7-85f5-473a-80dc-e9b734e25bd4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.133017 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7b2a1f7-85f5-473a-80dc-e9b734e25bd4" (UID: "a7b2a1f7-85f5-473a-80dc-e9b734e25bd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.153145 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-config-data" (OuterVolumeSpecName: "config-data") pod "a7b2a1f7-85f5-473a-80dc-e9b734e25bd4" (UID: "a7b2a1f7-85f5-473a-80dc-e9b734e25bd4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.208479 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.208520 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmddk\" (UniqueName: \"kubernetes.io/projected/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-kube-api-access-tmddk\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.208537 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.208550 4687 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.530220 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-6r9nz" event={"ID":"a7b2a1f7-85f5-473a-80dc-e9b734e25bd4","Type":"ContainerDied","Data":"c08089d1fbbef61eba30da69acb22ef71236df1a359a6d0a57cacc0366e2d09c"} Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.530267 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c08089d1fbbef61eba30da69acb22ef71236df1a359a6d0a57cacc0366e2d09c" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.530435 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-6r9nz" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.865283 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:10:11 crc kubenswrapper[4687]: E0131 07:10:11.865661 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7b2a1f7-85f5-473a-80dc-e9b734e25bd4" containerName="glance-db-sync" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.865682 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b2a1f7-85f5-473a-80dc-e9b734e25bd4" containerName="glance-db-sync" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.865837 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7b2a1f7-85f5-473a-80dc-e9b734e25bd4" containerName="glance-db-sync" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.866504 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.868398 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"cert-glance-default-public-svc" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.868953 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"combined-ca-bundle" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.869099 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.869230 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-kncz5" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.869758 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"cert-glance-default-internal-svc" Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.878139 4687 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:10:11 crc kubenswrapper[4687]: I0131 07:10:11.880268 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.022124 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5610fd57-6932-437e-9858-1c43241268b8-httpd-run\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.022188 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.022215 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-scripts\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.022248 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-config-data\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.022271 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.022538 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.022696 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.022729 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5610fd57-6932-437e-9858-1c43241268b8-logs\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.022975 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcxs7\" (UniqueName: \"kubernetes.io/projected/5610fd57-6932-437e-9858-1c43241268b8-kube-api-access-zcxs7\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.124817 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-config-data\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.124877 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.124942 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.124991 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.125014 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5610fd57-6932-437e-9858-1c43241268b8-logs\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.125048 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcxs7\" 
(UniqueName: \"kubernetes.io/projected/5610fd57-6932-437e-9858-1c43241268b8-kube-api-access-zcxs7\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.125094 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5610fd57-6932-437e-9858-1c43241268b8-httpd-run\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.125122 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.125157 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-scripts\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.125709 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") device mount path \"/mnt/openstack/pv06\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.125921 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5610fd57-6932-437e-9858-1c43241268b8-logs\") pod 
\"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.125991 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5610fd57-6932-437e-9858-1c43241268b8-httpd-run\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.130017 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-scripts\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.130494 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.130749 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.131051 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-config-data\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " 
pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.131469 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.144391 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcxs7\" (UniqueName: \"kubernetes.io/projected/5610fd57-6932-437e-9858-1c43241268b8-kube-api-access-zcxs7\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.152294 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-0\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.187037 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.667490 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:10:12 crc kubenswrapper[4687]: I0131 07:10:12.993622 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:10:13 crc kubenswrapper[4687]: I0131 07:10:13.545565 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"5610fd57-6932-437e-9858-1c43241268b8","Type":"ContainerStarted","Data":"71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551"} Jan 31 07:10:13 crc kubenswrapper[4687]: I0131 07:10:13.545853 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"5610fd57-6932-437e-9858-1c43241268b8","Type":"ContainerStarted","Data":"3f5e4c6ba6b805aa9117d183fe296b54ab60ff8a81889f9b43464e2a36ad2a1b"} Jan 31 07:10:14 crc kubenswrapper[4687]: I0131 07:10:14.555286 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"5610fd57-6932-437e-9858-1c43241268b8","Type":"ContainerStarted","Data":"adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b"} Jan 31 07:10:14 crc kubenswrapper[4687]: I0131 07:10:14.555482 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="5610fd57-6932-437e-9858-1c43241268b8" containerName="glance-log" containerID="cri-o://71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551" gracePeriod=30 Jan 31 07:10:14 crc kubenswrapper[4687]: I0131 07:10:14.555836 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="5610fd57-6932-437e-9858-1c43241268b8" containerName="glance-httpd" 
containerID="cri-o://adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b" gracePeriod=30 Jan 31 07:10:14 crc kubenswrapper[4687]: I0131 07:10:14.576890 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=3.576869946 podStartE2EDuration="3.576869946s" podCreationTimestamp="2026-01-31 07:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:10:14.575664753 +0000 UTC m=+1640.852924328" watchObservedRunningTime="2026-01-31 07:10:14.576869946 +0000 UTC m=+1640.854129521" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.117214 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.317923 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-config-data\") pod \"5610fd57-6932-437e-9858-1c43241268b8\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.317963 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-public-tls-certs\") pod \"5610fd57-6932-437e-9858-1c43241268b8\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.318010 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcxs7\" (UniqueName: \"kubernetes.io/projected/5610fd57-6932-437e-9858-1c43241268b8-kube-api-access-zcxs7\") pod \"5610fd57-6932-437e-9858-1c43241268b8\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 
07:10:15.318051 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-scripts\") pod \"5610fd57-6932-437e-9858-1c43241268b8\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.318093 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5610fd57-6932-437e-9858-1c43241268b8-logs\") pod \"5610fd57-6932-437e-9858-1c43241268b8\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.318132 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"5610fd57-6932-437e-9858-1c43241268b8\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.318158 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-combined-ca-bundle\") pod \"5610fd57-6932-437e-9858-1c43241268b8\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.318185 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-internal-tls-certs\") pod \"5610fd57-6932-437e-9858-1c43241268b8\" (UID: \"5610fd57-6932-437e-9858-1c43241268b8\") " Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.318211 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5610fd57-6932-437e-9858-1c43241268b8-httpd-run\") pod \"5610fd57-6932-437e-9858-1c43241268b8\" (UID: 
\"5610fd57-6932-437e-9858-1c43241268b8\") " Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.318987 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5610fd57-6932-437e-9858-1c43241268b8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5610fd57-6932-437e-9858-1c43241268b8" (UID: "5610fd57-6932-437e-9858-1c43241268b8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.319159 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5610fd57-6932-437e-9858-1c43241268b8-logs" (OuterVolumeSpecName: "logs") pod "5610fd57-6932-437e-9858-1c43241268b8" (UID: "5610fd57-6932-437e-9858-1c43241268b8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.329817 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-scripts" (OuterVolumeSpecName: "scripts") pod "5610fd57-6932-437e-9858-1c43241268b8" (UID: "5610fd57-6932-437e-9858-1c43241268b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.329836 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5610fd57-6932-437e-9858-1c43241268b8-kube-api-access-zcxs7" (OuterVolumeSpecName: "kube-api-access-zcxs7") pod "5610fd57-6932-437e-9858-1c43241268b8" (UID: "5610fd57-6932-437e-9858-1c43241268b8"). InnerVolumeSpecName "kube-api-access-zcxs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.329932 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "5610fd57-6932-437e-9858-1c43241268b8" (UID: "5610fd57-6932-437e-9858-1c43241268b8"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.339107 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5610fd57-6932-437e-9858-1c43241268b8" (UID: "5610fd57-6932-437e-9858-1c43241268b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.355062 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5610fd57-6932-437e-9858-1c43241268b8" (UID: "5610fd57-6932-437e-9858-1c43241268b8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.356024 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-config-data" (OuterVolumeSpecName: "config-data") pod "5610fd57-6932-437e-9858-1c43241268b8" (UID: "5610fd57-6932-437e-9858-1c43241268b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.377609 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5610fd57-6932-437e-9858-1c43241268b8" (UID: "5610fd57-6932-437e-9858-1c43241268b8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.419452 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5610fd57-6932-437e-9858-1c43241268b8-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.419488 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.419497 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.419510 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcxs7\" (UniqueName: \"kubernetes.io/projected/5610fd57-6932-437e-9858-1c43241268b8-kube-api-access-zcxs7\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.419519 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.419527 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5610fd57-6932-437e-9858-1c43241268b8-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.419560 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.419578 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.419651 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5610fd57-6932-437e-9858-1c43241268b8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.432991 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.520500 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.564221 4687 generic.go:334] "Generic (PLEG): container finished" podID="5610fd57-6932-437e-9858-1c43241268b8" containerID="adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b" exitCode=0 Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.564260 4687 generic.go:334] "Generic (PLEG): container finished" podID="5610fd57-6932-437e-9858-1c43241268b8" containerID="71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551" exitCode=143 Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.564276 4687 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"5610fd57-6932-437e-9858-1c43241268b8","Type":"ContainerDied","Data":"adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b"} Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.564373 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"5610fd57-6932-437e-9858-1c43241268b8","Type":"ContainerDied","Data":"71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551"} Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.564392 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"5610fd57-6932-437e-9858-1c43241268b8","Type":"ContainerDied","Data":"3f5e4c6ba6b805aa9117d183fe296b54ab60ff8a81889f9b43464e2a36ad2a1b"} Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.564441 4687 scope.go:117] "RemoveContainer" containerID="adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.565250 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.593093 4687 scope.go:117] "RemoveContainer" containerID="71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.595834 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.614937 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.617101 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:10:15 crc kubenswrapper[4687]: E0131 07:10:15.617364 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5610fd57-6932-437e-9858-1c43241268b8" containerName="glance-log" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.617381 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5610fd57-6932-437e-9858-1c43241268b8" containerName="glance-log" Jan 31 07:10:15 crc kubenswrapper[4687]: E0131 07:10:15.617420 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5610fd57-6932-437e-9858-1c43241268b8" containerName="glance-httpd" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.617428 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5610fd57-6932-437e-9858-1c43241268b8" containerName="glance-httpd" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.617569 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5610fd57-6932-437e-9858-1c43241268b8" containerName="glance-httpd" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.617591 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5610fd57-6932-437e-9858-1c43241268b8" containerName="glance-log" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.618374 4687 
scope.go:117] "RemoveContainer" containerID="adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.618581 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: E0131 07:10:15.623018 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b\": container with ID starting with adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b not found: ID does not exist" containerID="adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.623080 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b"} err="failed to get container status \"adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b\": rpc error: code = NotFound desc = could not find container \"adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b\": container with ID starting with adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b not found: ID does not exist" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.623112 4687 scope.go:117] "RemoveContainer" containerID="71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.623913 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"cert-glance-default-public-svc" Jan 31 07:10:15 crc kubenswrapper[4687]: E0131 07:10:15.624144 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551\": container with ID 
starting with 71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551 not found: ID does not exist" containerID="71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.624195 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551"} err="failed to get container status \"71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551\": rpc error: code = NotFound desc = could not find container \"71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551\": container with ID starting with 71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551 not found: ID does not exist" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.624227 4687 scope.go:117] "RemoveContainer" containerID="adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.624545 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-kncz5" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.624643 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b"} err="failed to get container status \"adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b\": rpc error: code = NotFound desc = could not find container \"adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b\": container with ID starting with adc1a0be1894faf5cbd54e3f9311da95f9ca78ba808a3a7fbbc57b616f86141b not found: ID does not exist" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.624677 4687 scope.go:117] "RemoveContainer" containerID="71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.624663 4687 
reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"cert-glance-default-internal-svc" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.624776 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.624690 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.624951 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"combined-ca-bundle" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.625058 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551"} err="failed to get container status \"71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551\": rpc error: code = NotFound desc = could not find container \"71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551\": container with ID starting with 71f3513ef19ca9e2d82925fbb78543c3dc4597c34899f43540bf94e312a2d551 not found: ID does not exist" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.630845 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.724924 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-logs\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.724997 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-config-data\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.725034 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-scripts\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.725069 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-httpd-run\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.725088 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.725114 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.725129 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdr88\" (UniqueName: 
\"kubernetes.io/projected/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-kube-api-access-vdr88\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.725315 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.725480 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.827102 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-logs\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.827197 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-config-data\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.827244 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-scripts\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.827285 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-httpd-run\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.827308 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.827346 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.827367 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdr88\" (UniqueName: \"kubernetes.io/projected/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-kube-api-access-vdr88\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.827427 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-public-tls-certs\") 
pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.827465 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.827786 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") device mount path \"/mnt/openstack/pv06\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.828147 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-httpd-run\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.828214 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-logs\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.831717 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " 
pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.832072 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-scripts\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.832530 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-config-data\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.841191 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.844819 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.849774 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdr88\" (UniqueName: \"kubernetes.io/projected/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-kube-api-access-vdr88\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 
07:10:15.851131 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-0\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:15 crc kubenswrapper[4687]: I0131 07:10:15.947875 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:16 crc kubenswrapper[4687]: I0131 07:10:16.356052 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:10:16 crc kubenswrapper[4687]: W0131 07:10:16.365130 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd03c159f_ff6e_49e8_a198_e0eb96b6dcd5.slice/crio-f6a03d2ba195c6b89c2f6f5ac4be86dbcd19e96417ee26d377701378f8fedb3b WatchSource:0}: Error finding container f6a03d2ba195c6b89c2f6f5ac4be86dbcd19e96417ee26d377701378f8fedb3b: Status 404 returned error can't find the container with id f6a03d2ba195c6b89c2f6f5ac4be86dbcd19e96417ee26d377701378f8fedb3b Jan 31 07:10:16 crc kubenswrapper[4687]: I0131 07:10:16.585311 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5","Type":"ContainerStarted","Data":"f6a03d2ba195c6b89c2f6f5ac4be86dbcd19e96417ee26d377701378f8fedb3b"} Jan 31 07:10:17 crc kubenswrapper[4687]: I0131 07:10:17.596224 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5","Type":"ContainerStarted","Data":"a7244e3b4fcbe9e9fcceb58971d42bd6a1d1a9d5867208f613becfd8222f3e6d"} Jan 31 07:10:17 crc kubenswrapper[4687]: I0131 07:10:17.596801 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5","Type":"ContainerStarted","Data":"0e2f36e38ee5106b5264f7443b9dbae78cf7a18366f0162de5c1c0c93a12ee98"} Jan 31 07:10:17 crc kubenswrapper[4687]: I0131 07:10:17.613871 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5610fd57-6932-437e-9858-1c43241268b8" path="/var/lib/kubelet/pods/5610fd57-6932-437e-9858-1c43241268b8/volumes" Jan 31 07:10:17 crc kubenswrapper[4687]: I0131 07:10:17.618052 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=2.618029747 podStartE2EDuration="2.618029747s" podCreationTimestamp="2026-01-31 07:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:10:17.612256079 +0000 UTC m=+1643.889515654" watchObservedRunningTime="2026-01-31 07:10:17.618029747 +0000 UTC m=+1643.895289322" Jan 31 07:10:20 crc kubenswrapper[4687]: I0131 07:10:20.603265 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:10:20 crc kubenswrapper[4687]: E0131 07:10:20.604014 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:10:25 crc kubenswrapper[4687]: I0131 07:10:25.948063 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:25 crc kubenswrapper[4687]: I0131 07:10:25.948350 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:25 crc kubenswrapper[4687]: I0131 07:10:25.973846 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:25 crc kubenswrapper[4687]: I0131 07:10:25.988143 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:26 crc kubenswrapper[4687]: I0131 07:10:26.681800 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:26 crc kubenswrapper[4687]: I0131 07:10:26.681840 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:29 crc kubenswrapper[4687]: I0131 07:10:29.010756 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:29 crc kubenswrapper[4687]: I0131 07:10:29.011369 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:10:29 crc kubenswrapper[4687]: I0131 07:10:29.012145 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:29 crc kubenswrapper[4687]: I0131 07:10:29.940830 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-6r9nz"] Jan 31 07:10:29 crc kubenswrapper[4687]: I0131 07:10:29.946103 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-6r9nz"] Jan 31 07:10:29 crc kubenswrapper[4687]: I0131 07:10:29.986351 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance7fb4-account-delete-b8lh2"] Jan 31 07:10:29 crc kubenswrapper[4687]: I0131 07:10:29.987439 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance7fb4-account-delete-b8lh2" Jan 31 07:10:29 crc kubenswrapper[4687]: I0131 07:10:29.999755 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance7fb4-account-delete-b8lh2"] Jan 31 07:10:30 crc kubenswrapper[4687]: I0131 07:10:30.034076 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:10:30 crc kubenswrapper[4687]: I0131 07:10:30.140470 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7wq8\" (UniqueName: \"kubernetes.io/projected/b1b7e298-f857-4c9f-b01a-1ffc09832cf8-kube-api-access-d7wq8\") pod \"glance7fb4-account-delete-b8lh2\" (UID: \"b1b7e298-f857-4c9f-b01a-1ffc09832cf8\") " pod="glance-kuttl-tests/glance7fb4-account-delete-b8lh2" Jan 31 07:10:30 crc kubenswrapper[4687]: I0131 07:10:30.140795 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1b7e298-f857-4c9f-b01a-1ffc09832cf8-operator-scripts\") pod \"glance7fb4-account-delete-b8lh2\" (UID: \"b1b7e298-f857-4c9f-b01a-1ffc09832cf8\") " pod="glance-kuttl-tests/glance7fb4-account-delete-b8lh2" Jan 31 07:10:30 crc kubenswrapper[4687]: I0131 07:10:30.242194 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7wq8\" (UniqueName: \"kubernetes.io/projected/b1b7e298-f857-4c9f-b01a-1ffc09832cf8-kube-api-access-d7wq8\") pod \"glance7fb4-account-delete-b8lh2\" (UID: \"b1b7e298-f857-4c9f-b01a-1ffc09832cf8\") " pod="glance-kuttl-tests/glance7fb4-account-delete-b8lh2" Jan 31 07:10:30 crc kubenswrapper[4687]: I0131 07:10:30.242245 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1b7e298-f857-4c9f-b01a-1ffc09832cf8-operator-scripts\") pod 
\"glance7fb4-account-delete-b8lh2\" (UID: \"b1b7e298-f857-4c9f-b01a-1ffc09832cf8\") " pod="glance-kuttl-tests/glance7fb4-account-delete-b8lh2" Jan 31 07:10:30 crc kubenswrapper[4687]: I0131 07:10:30.243041 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1b7e298-f857-4c9f-b01a-1ffc09832cf8-operator-scripts\") pod \"glance7fb4-account-delete-b8lh2\" (UID: \"b1b7e298-f857-4c9f-b01a-1ffc09832cf8\") " pod="glance-kuttl-tests/glance7fb4-account-delete-b8lh2" Jan 31 07:10:30 crc kubenswrapper[4687]: I0131 07:10:30.277910 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7wq8\" (UniqueName: \"kubernetes.io/projected/b1b7e298-f857-4c9f-b01a-1ffc09832cf8-kube-api-access-d7wq8\") pod \"glance7fb4-account-delete-b8lh2\" (UID: \"b1b7e298-f857-4c9f-b01a-1ffc09832cf8\") " pod="glance-kuttl-tests/glance7fb4-account-delete-b8lh2" Jan 31 07:10:30 crc kubenswrapper[4687]: I0131 07:10:30.355335 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance7fb4-account-delete-b8lh2" Jan 31 07:10:30 crc kubenswrapper[4687]: I0131 07:10:30.709947 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" containerName="glance-httpd" containerID="cri-o://a7244e3b4fcbe9e9fcceb58971d42bd6a1d1a9d5867208f613becfd8222f3e6d" gracePeriod=30 Jan 31 07:10:30 crc kubenswrapper[4687]: I0131 07:10:30.710305 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" containerName="glance-log" containerID="cri-o://0e2f36e38ee5106b5264f7443b9dbae78cf7a18366f0162de5c1c0c93a12ee98" gracePeriod=30 Jan 31 07:10:30 crc kubenswrapper[4687]: I0131 07:10:30.796080 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance7fb4-account-delete-b8lh2"] Jan 31 07:10:30 crc kubenswrapper[4687]: W0131 07:10:30.797195 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1b7e298_f857_4c9f_b01a_1ffc09832cf8.slice/crio-e0274185b777ba465c12eec7fedc9c997882b6af3b58a5ea7ece3b362107a2ee WatchSource:0}: Error finding container e0274185b777ba465c12eec7fedc9c997882b6af3b58a5ea7ece3b362107a2ee: Status 404 returned error can't find the container with id e0274185b777ba465c12eec7fedc9c997882b6af3b58a5ea7ece3b362107a2ee Jan 31 07:10:31 crc kubenswrapper[4687]: I0131 07:10:31.612981 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7b2a1f7-85f5-473a-80dc-e9b734e25bd4" path="/var/lib/kubelet/pods/a7b2a1f7-85f5-473a-80dc-e9b734e25bd4/volumes" Jan 31 07:10:31 crc kubenswrapper[4687]: I0131 07:10:31.717449 4687 generic.go:334] "Generic (PLEG): container finished" podID="b1b7e298-f857-4c9f-b01a-1ffc09832cf8" 
containerID="ccbf357c32a953a52079ade34a4d95cc4e18dec834e48ee3442cac1445c26404" exitCode=0 Jan 31 07:10:31 crc kubenswrapper[4687]: I0131 07:10:31.717569 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance7fb4-account-delete-b8lh2" event={"ID":"b1b7e298-f857-4c9f-b01a-1ffc09832cf8","Type":"ContainerDied","Data":"ccbf357c32a953a52079ade34a4d95cc4e18dec834e48ee3442cac1445c26404"} Jan 31 07:10:31 crc kubenswrapper[4687]: I0131 07:10:31.717631 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance7fb4-account-delete-b8lh2" event={"ID":"b1b7e298-f857-4c9f-b01a-1ffc09832cf8","Type":"ContainerStarted","Data":"e0274185b777ba465c12eec7fedc9c997882b6af3b58a5ea7ece3b362107a2ee"} Jan 31 07:10:31 crc kubenswrapper[4687]: I0131 07:10:31.719227 4687 generic.go:334] "Generic (PLEG): container finished" podID="d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" containerID="0e2f36e38ee5106b5264f7443b9dbae78cf7a18366f0162de5c1c0c93a12ee98" exitCode=143 Jan 31 07:10:31 crc kubenswrapper[4687]: I0131 07:10:31.719274 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5","Type":"ContainerDied","Data":"0e2f36e38ee5106b5264f7443b9dbae78cf7a18366f0162de5c1c0c93a12ee98"} Jan 31 07:10:32 crc kubenswrapper[4687]: I0131 07:10:32.988932 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance7fb4-account-delete-b8lh2" Jan 31 07:10:33 crc kubenswrapper[4687]: I0131 07:10:33.087772 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7wq8\" (UniqueName: \"kubernetes.io/projected/b1b7e298-f857-4c9f-b01a-1ffc09832cf8-kube-api-access-d7wq8\") pod \"b1b7e298-f857-4c9f-b01a-1ffc09832cf8\" (UID: \"b1b7e298-f857-4c9f-b01a-1ffc09832cf8\") " Jan 31 07:10:33 crc kubenswrapper[4687]: I0131 07:10:33.087834 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1b7e298-f857-4c9f-b01a-1ffc09832cf8-operator-scripts\") pod \"b1b7e298-f857-4c9f-b01a-1ffc09832cf8\" (UID: \"b1b7e298-f857-4c9f-b01a-1ffc09832cf8\") " Jan 31 07:10:33 crc kubenswrapper[4687]: I0131 07:10:33.095135 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b7e298-f857-4c9f-b01a-1ffc09832cf8-kube-api-access-d7wq8" (OuterVolumeSpecName: "kube-api-access-d7wq8") pod "b1b7e298-f857-4c9f-b01a-1ffc09832cf8" (UID: "b1b7e298-f857-4c9f-b01a-1ffc09832cf8"). InnerVolumeSpecName "kube-api-access-d7wq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:10:33 crc kubenswrapper[4687]: I0131 07:10:33.105384 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1b7e298-f857-4c9f-b01a-1ffc09832cf8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1b7e298-f857-4c9f-b01a-1ffc09832cf8" (UID: "b1b7e298-f857-4c9f-b01a-1ffc09832cf8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:10:33 crc kubenswrapper[4687]: I0131 07:10:33.189648 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7wq8\" (UniqueName: \"kubernetes.io/projected/b1b7e298-f857-4c9f-b01a-1ffc09832cf8-kube-api-access-d7wq8\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:33 crc kubenswrapper[4687]: I0131 07:10:33.189828 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1b7e298-f857-4c9f-b01a-1ffc09832cf8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:33 crc kubenswrapper[4687]: I0131 07:10:33.732723 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance7fb4-account-delete-b8lh2" event={"ID":"b1b7e298-f857-4c9f-b01a-1ffc09832cf8","Type":"ContainerDied","Data":"e0274185b777ba465c12eec7fedc9c997882b6af3b58a5ea7ece3b362107a2ee"} Jan 31 07:10:33 crc kubenswrapper[4687]: I0131 07:10:33.733031 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0274185b777ba465c12eec7fedc9c997882b6af3b58a5ea7ece3b362107a2ee" Jan 31 07:10:33 crc kubenswrapper[4687]: I0131 07:10:33.732788 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance7fb4-account-delete-b8lh2" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.384993 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.508476 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-internal-tls-certs\") pod \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.508515 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-scripts\") pod \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.508548 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-config-data\") pod \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.508626 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-combined-ca-bundle\") pod \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.508653 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-httpd-run\") pod \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.508685 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-public-tls-certs\") pod \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.508707 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdr88\" (UniqueName: \"kubernetes.io/projected/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-kube-api-access-vdr88\") pod \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.508747 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-logs\") pod \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.508772 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\" (UID: \"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5\") " Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.509637 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-logs" (OuterVolumeSpecName: "logs") pod "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" (UID: "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.509627 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" (UID: "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.513557 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" (UID: "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.514033 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-kube-api-access-vdr88" (OuterVolumeSpecName: "kube-api-access-vdr88") pod "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" (UID: "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5"). InnerVolumeSpecName "kube-api-access-vdr88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.514609 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-scripts" (OuterVolumeSpecName: "scripts") pod "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" (UID: "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.529709 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" (UID: "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.548401 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-config-data" (OuterVolumeSpecName: "config-data") pod "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" (UID: "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.548735 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" (UID: "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.560263 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" (UID: "d03c159f-ff6e-49e8-a198-e0eb96b6dcd5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.603779 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:10:34 crc kubenswrapper[4687]: E0131 07:10:34.604084 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.610537 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.610569 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.610583 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.610591 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.610602 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.610610 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.610619 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdr88\" (UniqueName: \"kubernetes.io/projected/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-kube-api-access-vdr88\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.610629 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.610660 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.628537 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.712046 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.743440 4687 generic.go:334] "Generic (PLEG): container finished" podID="d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" containerID="a7244e3b4fcbe9e9fcceb58971d42bd6a1d1a9d5867208f613becfd8222f3e6d" exitCode=0 Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.743481 4687 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5","Type":"ContainerDied","Data":"a7244e3b4fcbe9e9fcceb58971d42bd6a1d1a9d5867208f613becfd8222f3e6d"} Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.743511 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"d03c159f-ff6e-49e8-a198-e0eb96b6dcd5","Type":"ContainerDied","Data":"f6a03d2ba195c6b89c2f6f5ac4be86dbcd19e96417ee26d377701378f8fedb3b"} Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.743532 4687 scope.go:117] "RemoveContainer" containerID="a7244e3b4fcbe9e9fcceb58971d42bd6a1d1a9d5867208f613becfd8222f3e6d" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.743536 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.793995 4687 scope.go:117] "RemoveContainer" containerID="0e2f36e38ee5106b5264f7443b9dbae78cf7a18366f0162de5c1c0c93a12ee98" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.795076 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.802552 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.813217 4687 scope.go:117] "RemoveContainer" containerID="a7244e3b4fcbe9e9fcceb58971d42bd6a1d1a9d5867208f613becfd8222f3e6d" Jan 31 07:10:34 crc kubenswrapper[4687]: E0131 07:10:34.813703 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7244e3b4fcbe9e9fcceb58971d42bd6a1d1a9d5867208f613becfd8222f3e6d\": container with ID starting with a7244e3b4fcbe9e9fcceb58971d42bd6a1d1a9d5867208f613becfd8222f3e6d not found: ID does not exist" 
containerID="a7244e3b4fcbe9e9fcceb58971d42bd6a1d1a9d5867208f613becfd8222f3e6d" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.813748 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7244e3b4fcbe9e9fcceb58971d42bd6a1d1a9d5867208f613becfd8222f3e6d"} err="failed to get container status \"a7244e3b4fcbe9e9fcceb58971d42bd6a1d1a9d5867208f613becfd8222f3e6d\": rpc error: code = NotFound desc = could not find container \"a7244e3b4fcbe9e9fcceb58971d42bd6a1d1a9d5867208f613becfd8222f3e6d\": container with ID starting with a7244e3b4fcbe9e9fcceb58971d42bd6a1d1a9d5867208f613becfd8222f3e6d not found: ID does not exist" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.813778 4687 scope.go:117] "RemoveContainer" containerID="0e2f36e38ee5106b5264f7443b9dbae78cf7a18366f0162de5c1c0c93a12ee98" Jan 31 07:10:34 crc kubenswrapper[4687]: E0131 07:10:34.814241 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e2f36e38ee5106b5264f7443b9dbae78cf7a18366f0162de5c1c0c93a12ee98\": container with ID starting with 0e2f36e38ee5106b5264f7443b9dbae78cf7a18366f0162de5c1c0c93a12ee98 not found: ID does not exist" containerID="0e2f36e38ee5106b5264f7443b9dbae78cf7a18366f0162de5c1c0c93a12ee98" Jan 31 07:10:34 crc kubenswrapper[4687]: I0131 07:10:34.814319 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e2f36e38ee5106b5264f7443b9dbae78cf7a18366f0162de5c1c0c93a12ee98"} err="failed to get container status \"0e2f36e38ee5106b5264f7443b9dbae78cf7a18366f0162de5c1c0c93a12ee98\": rpc error: code = NotFound desc = could not find container \"0e2f36e38ee5106b5264f7443b9dbae78cf7a18366f0162de5c1c0c93a12ee98\": container with ID starting with 0e2f36e38ee5106b5264f7443b9dbae78cf7a18366f0162de5c1c0c93a12ee98 not found: ID does not exist" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.007266 4687 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-587zg"] Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.020081 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-587zg"] Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.026776 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance7fb4-account-delete-b8lh2"] Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.033329 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance7fb4-account-delete-b8lh2"] Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.038339 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-7fb4-account-create-update-8xj92"] Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.056877 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-7fb4-account-create-update-8xj92"] Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.611592 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5729755f-9a6f-44fb-9b36-fbff7c52a62c" path="/var/lib/kubelet/pods/5729755f-9a6f-44fb-9b36-fbff7c52a62c/volumes" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.612966 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b7e298-f857-4c9f-b01a-1ffc09832cf8" path="/var/lib/kubelet/pods/b1b7e298-f857-4c9f-b01a-1ffc09832cf8/volumes" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.613744 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" path="/var/lib/kubelet/pods/d03c159f-ff6e-49e8-a198-e0eb96b6dcd5/volumes" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.615066 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f66527f6-3688-4da0-b142-5b2a4d6837c4" path="/var/lib/kubelet/pods/f66527f6-3688-4da0-b142-5b2a4d6837c4/volumes" Jan 31 07:10:35 crc 
kubenswrapper[4687]: I0131 07:10:35.642323 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-mb94x"] Jan 31 07:10:35 crc kubenswrapper[4687]: E0131 07:10:35.642603 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" containerName="glance-httpd" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.642615 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" containerName="glance-httpd" Jan 31 07:10:35 crc kubenswrapper[4687]: E0131 07:10:35.642638 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" containerName="glance-log" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.642646 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" containerName="glance-log" Jan 31 07:10:35 crc kubenswrapper[4687]: E0131 07:10:35.642655 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b7e298-f857-4c9f-b01a-1ffc09832cf8" containerName="mariadb-account-delete" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.642661 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b7e298-f857-4c9f-b01a-1ffc09832cf8" containerName="mariadb-account-delete" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.642788 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" containerName="glance-log" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.642800 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d03c159f-ff6e-49e8-a198-e0eb96b6dcd5" containerName="glance-httpd" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.642813 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b7e298-f857-4c9f-b01a-1ffc09832cf8" containerName="mariadb-account-delete" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 
07:10:35.643210 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-mb94x" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.654123 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-mb94x"] Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.748530 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-7e39-account-create-update-nxcld"] Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.751431 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-7e39-account-create-update-nxcld" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.756944 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-7e39-account-create-update-nxcld"] Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.758811 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.836010 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndbkv\" (UniqueName: \"kubernetes.io/projected/b97fb5e7-2d73-4e8e-9c27-c222d4c23c76-kube-api-access-ndbkv\") pod \"glance-db-create-mb94x\" (UID: \"b97fb5e7-2d73-4e8e-9c27-c222d4c23c76\") " pod="glance-kuttl-tests/glance-db-create-mb94x" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.836158 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b97fb5e7-2d73-4e8e-9c27-c222d4c23c76-operator-scripts\") pod \"glance-db-create-mb94x\" (UID: \"b97fb5e7-2d73-4e8e-9c27-c222d4c23c76\") " pod="glance-kuttl-tests/glance-db-create-mb94x" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.949474 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4plz7\" (UniqueName: \"kubernetes.io/projected/151bae23-bc79-469e-a56c-b8f85ca84e7d-kube-api-access-4plz7\") pod \"glance-7e39-account-create-update-nxcld\" (UID: \"151bae23-bc79-469e-a56c-b8f85ca84e7d\") " pod="glance-kuttl-tests/glance-7e39-account-create-update-nxcld" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.949659 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/151bae23-bc79-469e-a56c-b8f85ca84e7d-operator-scripts\") pod \"glance-7e39-account-create-update-nxcld\" (UID: \"151bae23-bc79-469e-a56c-b8f85ca84e7d\") " pod="glance-kuttl-tests/glance-7e39-account-create-update-nxcld" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.949894 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndbkv\" (UniqueName: \"kubernetes.io/projected/b97fb5e7-2d73-4e8e-9c27-c222d4c23c76-kube-api-access-ndbkv\") pod \"glance-db-create-mb94x\" (UID: \"b97fb5e7-2d73-4e8e-9c27-c222d4c23c76\") " pod="glance-kuttl-tests/glance-db-create-mb94x" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.949934 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b97fb5e7-2d73-4e8e-9c27-c222d4c23c76-operator-scripts\") pod \"glance-db-create-mb94x\" (UID: \"b97fb5e7-2d73-4e8e-9c27-c222d4c23c76\") " pod="glance-kuttl-tests/glance-db-create-mb94x" Jan 31 07:10:35 crc kubenswrapper[4687]: I0131 07:10:35.951479 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b97fb5e7-2d73-4e8e-9c27-c222d4c23c76-operator-scripts\") pod \"glance-db-create-mb94x\" (UID: \"b97fb5e7-2d73-4e8e-9c27-c222d4c23c76\") " pod="glance-kuttl-tests/glance-db-create-mb94x" Jan 31 07:10:35 crc 
kubenswrapper[4687]: I0131 07:10:35.970622 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndbkv\" (UniqueName: \"kubernetes.io/projected/b97fb5e7-2d73-4e8e-9c27-c222d4c23c76-kube-api-access-ndbkv\") pod \"glance-db-create-mb94x\" (UID: \"b97fb5e7-2d73-4e8e-9c27-c222d4c23c76\") " pod="glance-kuttl-tests/glance-db-create-mb94x" Jan 31 07:10:36 crc kubenswrapper[4687]: I0131 07:10:36.051605 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/151bae23-bc79-469e-a56c-b8f85ca84e7d-operator-scripts\") pod \"glance-7e39-account-create-update-nxcld\" (UID: \"151bae23-bc79-469e-a56c-b8f85ca84e7d\") " pod="glance-kuttl-tests/glance-7e39-account-create-update-nxcld" Jan 31 07:10:36 crc kubenswrapper[4687]: I0131 07:10:36.051949 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4plz7\" (UniqueName: \"kubernetes.io/projected/151bae23-bc79-469e-a56c-b8f85ca84e7d-kube-api-access-4plz7\") pod \"glance-7e39-account-create-update-nxcld\" (UID: \"151bae23-bc79-469e-a56c-b8f85ca84e7d\") " pod="glance-kuttl-tests/glance-7e39-account-create-update-nxcld" Jan 31 07:10:36 crc kubenswrapper[4687]: I0131 07:10:36.052638 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/151bae23-bc79-469e-a56c-b8f85ca84e7d-operator-scripts\") pod \"glance-7e39-account-create-update-nxcld\" (UID: \"151bae23-bc79-469e-a56c-b8f85ca84e7d\") " pod="glance-kuttl-tests/glance-7e39-account-create-update-nxcld" Jan 31 07:10:36 crc kubenswrapper[4687]: I0131 07:10:36.068191 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4plz7\" (UniqueName: \"kubernetes.io/projected/151bae23-bc79-469e-a56c-b8f85ca84e7d-kube-api-access-4plz7\") pod \"glance-7e39-account-create-update-nxcld\" (UID: 
\"151bae23-bc79-469e-a56c-b8f85ca84e7d\") " pod="glance-kuttl-tests/glance-7e39-account-create-update-nxcld" Jan 31 07:10:36 crc kubenswrapper[4687]: I0131 07:10:36.266254 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-mb94x" Jan 31 07:10:36 crc kubenswrapper[4687]: I0131 07:10:36.367950 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-7e39-account-create-update-nxcld" Jan 31 07:10:36 crc kubenswrapper[4687]: I0131 07:10:36.694541 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-mb94x"] Jan 31 07:10:36 crc kubenswrapper[4687]: I0131 07:10:36.760102 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-mb94x" event={"ID":"b97fb5e7-2d73-4e8e-9c27-c222d4c23c76","Type":"ContainerStarted","Data":"07db1cd7cdd371510edec6e5ea5b8beb653afe6195e9d7c5d77d5bec7bbdfbb9"} Jan 31 07:10:36 crc kubenswrapper[4687]: I0131 07:10:36.814672 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-7e39-account-create-update-nxcld"] Jan 31 07:10:36 crc kubenswrapper[4687]: W0131 07:10:36.818798 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod151bae23_bc79_469e_a56c_b8f85ca84e7d.slice/crio-3e8d031b1670c4a16908f4d2ec0bad8a6f46dd5d4e99cf39648d5e3a2d718047 WatchSource:0}: Error finding container 3e8d031b1670c4a16908f4d2ec0bad8a6f46dd5d4e99cf39648d5e3a2d718047: Status 404 returned error can't find the container with id 3e8d031b1670c4a16908f4d2ec0bad8a6f46dd5d4e99cf39648d5e3a2d718047 Jan 31 07:10:37 crc kubenswrapper[4687]: I0131 07:10:37.766640 4687 generic.go:334] "Generic (PLEG): container finished" podID="b97fb5e7-2d73-4e8e-9c27-c222d4c23c76" containerID="f2c9eda8abca0dbbeadbfaa1a88b276fc5416fedfc321695a48322da8e838e87" exitCode=0 Jan 31 07:10:37 crc 
kubenswrapper[4687]: I0131 07:10:37.767017 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-mb94x" event={"ID":"b97fb5e7-2d73-4e8e-9c27-c222d4c23c76","Type":"ContainerDied","Data":"f2c9eda8abca0dbbeadbfaa1a88b276fc5416fedfc321695a48322da8e838e87"} Jan 31 07:10:37 crc kubenswrapper[4687]: I0131 07:10:37.768806 4687 generic.go:334] "Generic (PLEG): container finished" podID="151bae23-bc79-469e-a56c-b8f85ca84e7d" containerID="8997d9ed0a7d01fe070901aa7ad1c7cd27ad2fb20f2a3681c1a5a7fbfdb16824" exitCode=0 Jan 31 07:10:37 crc kubenswrapper[4687]: I0131 07:10:37.768854 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-7e39-account-create-update-nxcld" event={"ID":"151bae23-bc79-469e-a56c-b8f85ca84e7d","Type":"ContainerDied","Data":"8997d9ed0a7d01fe070901aa7ad1c7cd27ad2fb20f2a3681c1a5a7fbfdb16824"} Jan 31 07:10:37 crc kubenswrapper[4687]: I0131 07:10:37.768875 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-7e39-account-create-update-nxcld" event={"ID":"151bae23-bc79-469e-a56c-b8f85ca84e7d","Type":"ContainerStarted","Data":"3e8d031b1670c4a16908f4d2ec0bad8a6f46dd5d4e99cf39648d5e3a2d718047"} Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.107840 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-mb94x" Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.113721 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-7e39-account-create-update-nxcld" Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.201994 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndbkv\" (UniqueName: \"kubernetes.io/projected/b97fb5e7-2d73-4e8e-9c27-c222d4c23c76-kube-api-access-ndbkv\") pod \"b97fb5e7-2d73-4e8e-9c27-c222d4c23c76\" (UID: \"b97fb5e7-2d73-4e8e-9c27-c222d4c23c76\") " Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.202073 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/151bae23-bc79-469e-a56c-b8f85ca84e7d-operator-scripts\") pod \"151bae23-bc79-469e-a56c-b8f85ca84e7d\" (UID: \"151bae23-bc79-469e-a56c-b8f85ca84e7d\") " Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.202141 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4plz7\" (UniqueName: \"kubernetes.io/projected/151bae23-bc79-469e-a56c-b8f85ca84e7d-kube-api-access-4plz7\") pod \"151bae23-bc79-469e-a56c-b8f85ca84e7d\" (UID: \"151bae23-bc79-469e-a56c-b8f85ca84e7d\") " Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.202158 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b97fb5e7-2d73-4e8e-9c27-c222d4c23c76-operator-scripts\") pod \"b97fb5e7-2d73-4e8e-9c27-c222d4c23c76\" (UID: \"b97fb5e7-2d73-4e8e-9c27-c222d4c23c76\") " Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.203116 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b97fb5e7-2d73-4e8e-9c27-c222d4c23c76-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b97fb5e7-2d73-4e8e-9c27-c222d4c23c76" (UID: "b97fb5e7-2d73-4e8e-9c27-c222d4c23c76"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.203123 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/151bae23-bc79-469e-a56c-b8f85ca84e7d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "151bae23-bc79-469e-a56c-b8f85ca84e7d" (UID: "151bae23-bc79-469e-a56c-b8f85ca84e7d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.209607 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b97fb5e7-2d73-4e8e-9c27-c222d4c23c76-kube-api-access-ndbkv" (OuterVolumeSpecName: "kube-api-access-ndbkv") pod "b97fb5e7-2d73-4e8e-9c27-c222d4c23c76" (UID: "b97fb5e7-2d73-4e8e-9c27-c222d4c23c76"). InnerVolumeSpecName "kube-api-access-ndbkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.209752 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/151bae23-bc79-469e-a56c-b8f85ca84e7d-kube-api-access-4plz7" (OuterVolumeSpecName: "kube-api-access-4plz7") pod "151bae23-bc79-469e-a56c-b8f85ca84e7d" (UID: "151bae23-bc79-469e-a56c-b8f85ca84e7d"). InnerVolumeSpecName "kube-api-access-4plz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.303150 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndbkv\" (UniqueName: \"kubernetes.io/projected/b97fb5e7-2d73-4e8e-9c27-c222d4c23c76-kube-api-access-ndbkv\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.303178 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/151bae23-bc79-469e-a56c-b8f85ca84e7d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.303192 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4plz7\" (UniqueName: \"kubernetes.io/projected/151bae23-bc79-469e-a56c-b8f85ca84e7d-kube-api-access-4plz7\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.303207 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b97fb5e7-2d73-4e8e-9c27-c222d4c23c76-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.783490 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-mb94x" Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.783485 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-mb94x" event={"ID":"b97fb5e7-2d73-4e8e-9c27-c222d4c23c76","Type":"ContainerDied","Data":"07db1cd7cdd371510edec6e5ea5b8beb653afe6195e9d7c5d77d5bec7bbdfbb9"} Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.783618 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07db1cd7cdd371510edec6e5ea5b8beb653afe6195e9d7c5d77d5bec7bbdfbb9" Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.785748 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-7e39-account-create-update-nxcld" event={"ID":"151bae23-bc79-469e-a56c-b8f85ca84e7d","Type":"ContainerDied","Data":"3e8d031b1670c4a16908f4d2ec0bad8a6f46dd5d4e99cf39648d5e3a2d718047"} Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.785775 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-7e39-account-create-update-nxcld" Jan 31 07:10:39 crc kubenswrapper[4687]: I0131 07:10:39.785780 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e8d031b1670c4a16908f4d2ec0bad8a6f46dd5d4e99cf39648d5e3a2d718047" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.045591 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-wldts"] Jan 31 07:10:41 crc kubenswrapper[4687]: E0131 07:10:41.046141 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97fb5e7-2d73-4e8e-9c27-c222d4c23c76" containerName="mariadb-database-create" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.046154 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97fb5e7-2d73-4e8e-9c27-c222d4c23c76" containerName="mariadb-database-create" Jan 31 07:10:41 crc kubenswrapper[4687]: E0131 07:10:41.046164 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="151bae23-bc79-469e-a56c-b8f85ca84e7d" containerName="mariadb-account-create-update" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.046170 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="151bae23-bc79-469e-a56c-b8f85ca84e7d" containerName="mariadb-account-create-update" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.046310 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="151bae23-bc79-469e-a56c-b8f85ca84e7d" containerName="mariadb-account-create-update" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.046324 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b97fb5e7-2d73-4e8e-9c27-c222d4c23c76" containerName="mariadb-database-create" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.046828 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-wldts" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.048495 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-49k95" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.049372 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.053150 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-wldts"] Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.125148 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836a862f-9202-40c4-92ca-8d3167ceab49-config-data\") pod \"glance-db-sync-wldts\" (UID: \"836a862f-9202-40c4-92ca-8d3167ceab49\") " pod="glance-kuttl-tests/glance-db-sync-wldts" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.125254 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llzpc\" (UniqueName: \"kubernetes.io/projected/836a862f-9202-40c4-92ca-8d3167ceab49-kube-api-access-llzpc\") pod \"glance-db-sync-wldts\" (UID: \"836a862f-9202-40c4-92ca-8d3167ceab49\") " pod="glance-kuttl-tests/glance-db-sync-wldts" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.125343 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/836a862f-9202-40c4-92ca-8d3167ceab49-db-sync-config-data\") pod \"glance-db-sync-wldts\" (UID: \"836a862f-9202-40c4-92ca-8d3167ceab49\") " pod="glance-kuttl-tests/glance-db-sync-wldts" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.226964 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llzpc\" (UniqueName: 
\"kubernetes.io/projected/836a862f-9202-40c4-92ca-8d3167ceab49-kube-api-access-llzpc\") pod \"glance-db-sync-wldts\" (UID: \"836a862f-9202-40c4-92ca-8d3167ceab49\") " pod="glance-kuttl-tests/glance-db-sync-wldts" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.227100 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/836a862f-9202-40c4-92ca-8d3167ceab49-db-sync-config-data\") pod \"glance-db-sync-wldts\" (UID: \"836a862f-9202-40c4-92ca-8d3167ceab49\") " pod="glance-kuttl-tests/glance-db-sync-wldts" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.227145 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836a862f-9202-40c4-92ca-8d3167ceab49-config-data\") pod \"glance-db-sync-wldts\" (UID: \"836a862f-9202-40c4-92ca-8d3167ceab49\") " pod="glance-kuttl-tests/glance-db-sync-wldts" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.232358 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836a862f-9202-40c4-92ca-8d3167ceab49-config-data\") pod \"glance-db-sync-wldts\" (UID: \"836a862f-9202-40c4-92ca-8d3167ceab49\") " pod="glance-kuttl-tests/glance-db-sync-wldts" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.232837 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/836a862f-9202-40c4-92ca-8d3167ceab49-db-sync-config-data\") pod \"glance-db-sync-wldts\" (UID: \"836a862f-9202-40c4-92ca-8d3167ceab49\") " pod="glance-kuttl-tests/glance-db-sync-wldts" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.245667 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llzpc\" (UniqueName: \"kubernetes.io/projected/836a862f-9202-40c4-92ca-8d3167ceab49-kube-api-access-llzpc\") pod 
\"glance-db-sync-wldts\" (UID: \"836a862f-9202-40c4-92ca-8d3167ceab49\") " pod="glance-kuttl-tests/glance-db-sync-wldts" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.365686 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-wldts" Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.652731 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-wldts"] Jan 31 07:10:41 crc kubenswrapper[4687]: W0131 07:10:41.669195 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod836a862f_9202_40c4_92ca_8d3167ceab49.slice/crio-5042972799b5bb655df8ef599ea0e9787ddacc83bf66f059c0644d8846f735e2 WatchSource:0}: Error finding container 5042972799b5bb655df8ef599ea0e9787ddacc83bf66f059c0644d8846f735e2: Status 404 returned error can't find the container with id 5042972799b5bb655df8ef599ea0e9787ddacc83bf66f059c0644d8846f735e2 Jan 31 07:10:41 crc kubenswrapper[4687]: I0131 07:10:41.799855 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-wldts" event={"ID":"836a862f-9202-40c4-92ca-8d3167ceab49","Type":"ContainerStarted","Data":"5042972799b5bb655df8ef599ea0e9787ddacc83bf66f059c0644d8846f735e2"} Jan 31 07:10:42 crc kubenswrapper[4687]: I0131 07:10:42.807090 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-wldts" event={"ID":"836a862f-9202-40c4-92ca-8d3167ceab49","Type":"ContainerStarted","Data":"a5c7f5011504f20993daf2b86806422de15bf7e8535d819064047cd5995791d1"} Jan 31 07:10:42 crc kubenswrapper[4687]: I0131 07:10:42.822998 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-wldts" podStartSLOduration=1.822983082 podStartE2EDuration="1.822983082s" podCreationTimestamp="2026-01-31 07:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:10:42.82143225 +0000 UTC m=+1669.098691835" watchObservedRunningTime="2026-01-31 07:10:42.822983082 +0000 UTC m=+1669.100242657" Jan 31 07:10:45 crc kubenswrapper[4687]: I0131 07:10:45.608320 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:10:45 crc kubenswrapper[4687]: E0131 07:10:45.608801 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:10:45 crc kubenswrapper[4687]: I0131 07:10:45.827376 4687 generic.go:334] "Generic (PLEG): container finished" podID="836a862f-9202-40c4-92ca-8d3167ceab49" containerID="a5c7f5011504f20993daf2b86806422de15bf7e8535d819064047cd5995791d1" exitCode=0 Jan 31 07:10:45 crc kubenswrapper[4687]: I0131 07:10:45.827449 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-wldts" event={"ID":"836a862f-9202-40c4-92ca-8d3167ceab49","Type":"ContainerDied","Data":"a5c7f5011504f20993daf2b86806422de15bf7e8535d819064047cd5995791d1"} Jan 31 07:10:47 crc kubenswrapper[4687]: I0131 07:10:47.126762 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-wldts" Jan 31 07:10:47 crc kubenswrapper[4687]: I0131 07:10:47.211232 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/836a862f-9202-40c4-92ca-8d3167ceab49-db-sync-config-data\") pod \"836a862f-9202-40c4-92ca-8d3167ceab49\" (UID: \"836a862f-9202-40c4-92ca-8d3167ceab49\") " Jan 31 07:10:47 crc kubenswrapper[4687]: I0131 07:10:47.211655 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llzpc\" (UniqueName: \"kubernetes.io/projected/836a862f-9202-40c4-92ca-8d3167ceab49-kube-api-access-llzpc\") pod \"836a862f-9202-40c4-92ca-8d3167ceab49\" (UID: \"836a862f-9202-40c4-92ca-8d3167ceab49\") " Jan 31 07:10:47 crc kubenswrapper[4687]: I0131 07:10:47.211823 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836a862f-9202-40c4-92ca-8d3167ceab49-config-data\") pod \"836a862f-9202-40c4-92ca-8d3167ceab49\" (UID: \"836a862f-9202-40c4-92ca-8d3167ceab49\") " Jan 31 07:10:47 crc kubenswrapper[4687]: I0131 07:10:47.216651 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836a862f-9202-40c4-92ca-8d3167ceab49-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "836a862f-9202-40c4-92ca-8d3167ceab49" (UID: "836a862f-9202-40c4-92ca-8d3167ceab49"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:47 crc kubenswrapper[4687]: I0131 07:10:47.223021 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836a862f-9202-40c4-92ca-8d3167ceab49-kube-api-access-llzpc" (OuterVolumeSpecName: "kube-api-access-llzpc") pod "836a862f-9202-40c4-92ca-8d3167ceab49" (UID: "836a862f-9202-40c4-92ca-8d3167ceab49"). 
InnerVolumeSpecName "kube-api-access-llzpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:10:47 crc kubenswrapper[4687]: I0131 07:10:47.246910 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836a862f-9202-40c4-92ca-8d3167ceab49-config-data" (OuterVolumeSpecName: "config-data") pod "836a862f-9202-40c4-92ca-8d3167ceab49" (UID: "836a862f-9202-40c4-92ca-8d3167ceab49"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:47 crc kubenswrapper[4687]: I0131 07:10:47.313268 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llzpc\" (UniqueName: \"kubernetes.io/projected/836a862f-9202-40c4-92ca-8d3167ceab49-kube-api-access-llzpc\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:47 crc kubenswrapper[4687]: I0131 07:10:47.313645 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836a862f-9202-40c4-92ca-8d3167ceab49-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:47 crc kubenswrapper[4687]: I0131 07:10:47.313657 4687 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/836a862f-9202-40c4-92ca-8d3167ceab49-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:47 crc kubenswrapper[4687]: I0131 07:10:47.846018 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-wldts" event={"ID":"836a862f-9202-40c4-92ca-8d3167ceab49","Type":"ContainerDied","Data":"5042972799b5bb655df8ef599ea0e9787ddacc83bf66f059c0644d8846f735e2"} Jan 31 07:10:47 crc kubenswrapper[4687]: I0131 07:10:47.846061 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5042972799b5bb655df8ef599ea0e9787ddacc83bf66f059c0644d8846f735e2" Jan 31 07:10:47 crc kubenswrapper[4687]: I0131 07:10:47.846108 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-wldts" Jan 31 07:10:48 crc kubenswrapper[4687]: I0131 07:10:48.992057 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:10:48 crc kubenswrapper[4687]: E0131 07:10:48.992548 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836a862f-9202-40c4-92ca-8d3167ceab49" containerName="glance-db-sync" Jan 31 07:10:48 crc kubenswrapper[4687]: I0131 07:10:48.992560 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="836a862f-9202-40c4-92ca-8d3167ceab49" containerName="glance-db-sync" Jan 31 07:10:48 crc kubenswrapper[4687]: I0131 07:10:48.992692 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="836a862f-9202-40c4-92ca-8d3167ceab49" containerName="glance-db-sync" Jan 31 07:10:48 crc kubenswrapper[4687]: I0131 07:10:48.993604 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:48 crc kubenswrapper[4687]: I0131 07:10:48.995707 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 07:10:48 crc kubenswrapper[4687]: I0131 07:10:48.995973 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-49k95" Jan 31 07:10:48 crc kubenswrapper[4687]: I0131 07:10:48.997774 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-external-config-data" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.093836 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.140773 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.140821 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.140852 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-run\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.140873 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-sys\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.140902 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.140918 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-logs\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.140942 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.140968 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.140985 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.141010 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-dev\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.141200 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-scripts\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.141360 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.141392 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-config-data\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.141449 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mlsb\" (UniqueName: \"kubernetes.io/projected/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-kube-api-access-6mlsb\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.179544 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.181228 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.183386 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.203232 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.242390 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-dev\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.242464 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-scripts\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.242481 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.242496 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-config-data\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 
07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.242511 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mlsb\" (UniqueName: \"kubernetes.io/projected/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-kube-api-access-6mlsb\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.242538 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.242560 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.242590 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-run\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.242610 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-sys\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: 
I0131 07:10:49.242630 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.242646 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-logs\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.242669 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.242696 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.242713 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.242993 4687 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.243432 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.243491 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-dev\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.248272 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.248788 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.248898 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage14-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") device mount path \"/mnt/openstack/pv14\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.250225 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.250328 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.250325 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-scripts\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.250373 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-run\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.250427 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-sys\") pod 
\"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.250693 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-logs\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.261865 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-config-data\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.270125 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.278744 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mlsb\" (UniqueName: \"kubernetes.io/projected/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-kube-api-access-6mlsb\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.281328 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-0\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " 
pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.312080 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.344172 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-run\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.344224 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-dev\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.344244 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.344285 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmh94\" (UniqueName: \"kubernetes.io/projected/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-kube-api-access-vmh94\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.344310 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.344336 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.344388 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-sys\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.344405 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.344451 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.344467 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.344483 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.344497 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-logs\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.344516 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.344541 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: 
I0131 07:10:49.445964 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.446391 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-sys\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.446442 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.446464 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.446482 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.446482 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-sys\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.446585 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.446623 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-logs\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.446668 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") device mount path \"/mnt/openstack/pv18\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.447048 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-logs\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.446659 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.447090 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.447130 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.447196 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.447245 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-run\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.447285 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-dev\") 
pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.447306 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.447354 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-run\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.447393 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmh94\" (UniqueName: \"kubernetes.io/projected/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-kube-api-access-vmh94\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.447448 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.447470 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-dev\") pod \"glance-default-internal-api-0\" (UID: 
\"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.447506 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.447536 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.447638 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") device mount path \"/mnt/openstack/pv16\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.447646 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.460146 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.468857 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.470604 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.472737 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmh94\" (UniqueName: \"kubernetes.io/projected/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-kube-api-access-vmh94\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.474749 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.496052 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.773116 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:10:49 crc kubenswrapper[4687]: I0131 07:10:49.868778 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"b0d237c4-ca7c-4e37-a6a3-e169266ef83d","Type":"ContainerStarted","Data":"e7032d14c04d9089d038066d8eb1f024f4ce9ac49c9d2190639cfa03a8ef7e66"} Jan 31 07:10:50 crc kubenswrapper[4687]: I0131 07:10:50.026468 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:10:50 crc kubenswrapper[4687]: W0131 07:10:50.033698 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab36a5e2_c43b_42c5_a9a5_c2e4c9cf71ec.slice/crio-d40eb9649627c075f8266b84f2f7d07b6d2ee7ec6871fa59b11a3f84cb93b1bd WatchSource:0}: Error finding container d40eb9649627c075f8266b84f2f7d07b6d2ee7ec6871fa59b11a3f84cb93b1bd: Status 404 returned error can't find the container with id d40eb9649627c075f8266b84f2f7d07b6d2ee7ec6871fa59b11a3f84cb93b1bd Jan 31 07:10:50 crc kubenswrapper[4687]: I0131 07:10:50.089483 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:10:50 crc kubenswrapper[4687]: I0131 07:10:50.889074 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"b0d237c4-ca7c-4e37-a6a3-e169266ef83d","Type":"ContainerStarted","Data":"8e3d948bb248cba3e65ecd22cf3eb76e7b5ba393187ea621b6372675ab176181"} Jan 31 07:10:50 crc kubenswrapper[4687]: I0131 07:10:50.889655 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" 
event={"ID":"b0d237c4-ca7c-4e37-a6a3-e169266ef83d","Type":"ContainerStarted","Data":"4c3e8b7d15b7ecdbf797bbe95423fed0e201799c4016278961f9cc6d6ad18181"} Jan 31 07:10:50 crc kubenswrapper[4687]: I0131 07:10:50.889673 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"b0d237c4-ca7c-4e37-a6a3-e169266ef83d","Type":"ContainerStarted","Data":"08ca1fe50d68d79ee615191136361c0d5f438478e8abed34dbef305b2746334c"} Jan 31 07:10:50 crc kubenswrapper[4687]: I0131 07:10:50.891841 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec","Type":"ContainerStarted","Data":"adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0"} Jan 31 07:10:50 crc kubenswrapper[4687]: I0131 07:10:50.891878 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec","Type":"ContainerStarted","Data":"a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8"} Jan 31 07:10:50 crc kubenswrapper[4687]: I0131 07:10:50.891889 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec","Type":"ContainerStarted","Data":"5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0"} Jan 31 07:10:50 crc kubenswrapper[4687]: I0131 07:10:50.891899 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec","Type":"ContainerStarted","Data":"d40eb9649627c075f8266b84f2f7d07b6d2ee7ec6871fa59b11a3f84cb93b1bd"} Jan 31 07:10:50 crc kubenswrapper[4687]: I0131 07:10:50.891961 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" 
podUID="ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" containerName="glance-log" containerID="cri-o://5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0" gracePeriod=30 Jan 31 07:10:50 crc kubenswrapper[4687]: I0131 07:10:50.891988 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" containerName="glance-api" containerID="cri-o://adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0" gracePeriod=30 Jan 31 07:10:50 crc kubenswrapper[4687]: I0131 07:10:50.892043 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" containerName="glance-httpd" containerID="cri-o://a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8" gracePeriod=30 Jan 31 07:10:50 crc kubenswrapper[4687]: I0131 07:10:50.965011 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=2.96499065 podStartE2EDuration="2.96499065s" podCreationTimestamp="2026-01-31 07:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:10:50.940767117 +0000 UTC m=+1677.218026712" watchObservedRunningTime="2026-01-31 07:10:50.96499065 +0000 UTC m=+1677.242250225" Jan 31 07:10:50 crc kubenswrapper[4687]: I0131 07:10:50.965857 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.965847564 podStartE2EDuration="2.965847564s" podCreationTimestamp="2026-01-31 07:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:10:50.962455541 +0000 UTC m=+1677.239715146" 
watchObservedRunningTime="2026-01-31 07:10:50.965847564 +0000 UTC m=+1677.243107139" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.316613 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.375898 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.375956 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.375977 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-sys\") pod \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.376012 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-etc-iscsi\") pod \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.376055 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmh94\" (UniqueName: \"kubernetes.io/projected/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-kube-api-access-vmh94\") pod \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " Jan 31 07:10:51 crc 
kubenswrapper[4687]: I0131 07:10:51.376075 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-run\") pod \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.376098 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-etc-nvme\") pod \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.376128 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-var-locks-brick\") pod \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.376166 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-config-data\") pod \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.376199 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-httpd-run\") pod \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.376221 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-dev\") pod \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\" (UID: 
\"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.376247 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-logs\") pod \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.376278 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-scripts\") pod \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.376481 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-lib-modules\") pod \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\" (UID: \"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec\") " Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.376766 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" (UID: "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.376783 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" (UID: "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.376817 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" (UID: "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.376798 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" (UID: "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.376910 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-dev" (OuterVolumeSpecName: "dev") pod "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" (UID: "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.377088 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-run" (OuterVolumeSpecName: "run") pod "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" (UID: "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.377194 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-sys" (OuterVolumeSpecName: "sys") pod "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" (UID: "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.377562 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-logs" (OuterVolumeSpecName: "logs") pod "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" (UID: "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.377801 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" (UID: "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.382038 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-kube-api-access-vmh94" (OuterVolumeSpecName: "kube-api-access-vmh94") pod "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" (UID: "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec"). InnerVolumeSpecName "kube-api-access-vmh94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.382058 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-scripts" (OuterVolumeSpecName: "scripts") pod "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" (UID: "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.382399 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage16-crc" (OuterVolumeSpecName: "glance-cache") pod "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" (UID: "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec"). InnerVolumeSpecName "local-storage16-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.383095 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage18-crc" (OuterVolumeSpecName: "glance") pod "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" (UID: "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec"). InnerVolumeSpecName "local-storage18-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.461241 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-config-data" (OuterVolumeSpecName: "config-data") pod "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" (UID: "ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.477639 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" " Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.477683 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" " Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.477698 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.477708 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.477717 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmh94\" (UniqueName: \"kubernetes.io/projected/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-kube-api-access-vmh94\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.477729 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.477737 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.477744 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" 
(UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.477752 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.477762 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.477770 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.477779 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.477787 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.477794 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.491972 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage16-crc" (UniqueName: "kubernetes.io/local-volume/local-storage16-crc") on node "crc" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.494331 4687 operation_generator.go:917] UnmountDevice 
succeeded for volume "local-storage18-crc" (UniqueName: "kubernetes.io/local-volume/local-storage18-crc") on node "crc" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.579942 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.579991 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.900581 4687 generic.go:334] "Generic (PLEG): container finished" podID="ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" containerID="adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0" exitCode=143 Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.900744 4687 generic.go:334] "Generic (PLEG): container finished" podID="ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" containerID="a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8" exitCode=143 Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.900849 4687 generic.go:334] "Generic (PLEG): container finished" podID="ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" containerID="5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0" exitCode=143 Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.900657 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.900639 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec","Type":"ContainerDied","Data":"adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0"} Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.900969 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec","Type":"ContainerDied","Data":"a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8"} Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.901001 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec","Type":"ContainerDied","Data":"5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0"} Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.901015 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec","Type":"ContainerDied","Data":"d40eb9649627c075f8266b84f2f7d07b6d2ee7ec6871fa59b11a3f84cb93b1bd"} Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.901035 4687 scope.go:117] "RemoveContainer" containerID="adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.924540 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.932781 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.941709 4687 scope.go:117] "RemoveContainer" 
containerID="a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.956307 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:10:51 crc kubenswrapper[4687]: E0131 07:10:51.956665 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" containerName="glance-api" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.956685 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" containerName="glance-api" Jan 31 07:10:51 crc kubenswrapper[4687]: E0131 07:10:51.956708 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" containerName="glance-httpd" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.956717 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" containerName="glance-httpd" Jan 31 07:10:51 crc kubenswrapper[4687]: E0131 07:10:51.956744 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" containerName="glance-log" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.956752 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" containerName="glance-log" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.956911 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" containerName="glance-httpd" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.956928 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" containerName="glance-api" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.956938 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" 
containerName="glance-log" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.958193 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.969583 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.972506 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.980950 4687 scope.go:117] "RemoveContainer" containerID="5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0" Jan 31 07:10:51 crc kubenswrapper[4687]: I0131 07:10:51.999722 4687 scope.go:117] "RemoveContainer" containerID="adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0" Jan 31 07:10:52 crc kubenswrapper[4687]: E0131 07:10:52.000369 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0\": container with ID starting with adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0 not found: ID does not exist" containerID="adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.000425 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0"} err="failed to get container status \"adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0\": rpc error: code = NotFound desc = could not find container \"adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0\": container with ID starting with adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0 not found: ID does not exist" 
Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.000463 4687 scope.go:117] "RemoveContainer" containerID="a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8" Jan 31 07:10:52 crc kubenswrapper[4687]: E0131 07:10:52.001045 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8\": container with ID starting with a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8 not found: ID does not exist" containerID="a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.001078 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8"} err="failed to get container status \"a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8\": rpc error: code = NotFound desc = could not find container \"a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8\": container with ID starting with a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8 not found: ID does not exist" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.001097 4687 scope.go:117] "RemoveContainer" containerID="5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0" Jan 31 07:10:52 crc kubenswrapper[4687]: E0131 07:10:52.001336 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0\": container with ID starting with 5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0 not found: ID does not exist" containerID="5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.001357 4687 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0"} err="failed to get container status \"5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0\": rpc error: code = NotFound desc = could not find container \"5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0\": container with ID starting with 5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0 not found: ID does not exist" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.001373 4687 scope.go:117] "RemoveContainer" containerID="adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.001675 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0"} err="failed to get container status \"adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0\": rpc error: code = NotFound desc = could not find container \"adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0\": container with ID starting with adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0 not found: ID does not exist" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.001694 4687 scope.go:117] "RemoveContainer" containerID="a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.003141 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8"} err="failed to get container status \"a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8\": rpc error: code = NotFound desc = could not find container \"a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8\": container with ID starting with a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8 not 
found: ID does not exist" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.003171 4687 scope.go:117] "RemoveContainer" containerID="5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.003562 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0"} err="failed to get container status \"5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0\": rpc error: code = NotFound desc = could not find container \"5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0\": container with ID starting with 5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0 not found: ID does not exist" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.003631 4687 scope.go:117] "RemoveContainer" containerID="adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.003952 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0"} err="failed to get container status \"adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0\": rpc error: code = NotFound desc = could not find container \"adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0\": container with ID starting with adfc485cd97ed665870fb370f6874f2baa1fdf3e1d5c4e43b2ae2b4be8aa91d0 not found: ID does not exist" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.003979 4687 scope.go:117] "RemoveContainer" containerID="a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.004230 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8"} err="failed to get 
container status \"a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8\": rpc error: code = NotFound desc = could not find container \"a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8\": container with ID starting with a150cc0e8cc7f03ccd05623cb049962df6af7c5ba38a695c6a81762614eb40f8 not found: ID does not exist" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.004249 4687 scope.go:117] "RemoveContainer" containerID="5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.004773 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0"} err="failed to get container status \"5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0\": rpc error: code = NotFound desc = could not find container \"5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0\": container with ID starting with 5553bf862ace3edaa0a71a6a49cf873881ad9b33c11836c2c12b2fe7a6896bc0 not found: ID does not exist" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.086579 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4909dbe9-535d-4581-b009-7c3cb0856689-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.086663 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-sys\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.086702 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.086740 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.086775 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-dev\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.086824 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-run\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.086853 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4909dbe9-535d-4581-b009-7c3cb0856689-logs\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 
07:10:52.086879 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.086942 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj6r6\" (UniqueName: \"kubernetes.io/projected/4909dbe9-535d-4581-b009-7c3cb0856689-kube-api-access-pj6r6\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.086979 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4909dbe9-535d-4581-b009-7c3cb0856689-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.087012 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4909dbe9-535d-4581-b009-7c3cb0856689-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.087231 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" 
Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.087364 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.087497 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.189132 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj6r6\" (UniqueName: \"kubernetes.io/projected/4909dbe9-535d-4581-b009-7c3cb0856689-kube-api-access-pj6r6\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.189461 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4909dbe9-535d-4581-b009-7c3cb0856689-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.189619 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4909dbe9-535d-4581-b009-7c3cb0856689-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.189752 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.189833 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.189935 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.189994 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.190141 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 
31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.190241 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4909dbe9-535d-4581-b009-7c3cb0856689-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.190334 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-sys\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.190445 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.190537 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.190629 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.190636 
4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-dev\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.190729 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-run\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.190761 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4909dbe9-535d-4581-b009-7c3cb0856689-logs\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.190784 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.190792 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") device mount path \"/mnt/openstack/pv16\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.190628 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sys\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-sys\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.190681 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.190945 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4909dbe9-535d-4581-b009-7c3cb0856689-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.190978 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-run\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.191027 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") device mount path \"/mnt/openstack/pv18\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.191113 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-dev\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.191372 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4909dbe9-535d-4581-b009-7c3cb0856689-logs\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.194648 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4909dbe9-535d-4581-b009-7c3cb0856689-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.214109 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj6r6\" (UniqueName: \"kubernetes.io/projected/4909dbe9-535d-4581-b009-7c3cb0856689-kube-api-access-pj6r6\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.214440 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4909dbe9-535d-4581-b009-7c3cb0856689-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.215513 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.215688 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.274704 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.777107 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:10:52 crc kubenswrapper[4687]: I0131 07:10:52.923849 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"4909dbe9-535d-4581-b009-7c3cb0856689","Type":"ContainerStarted","Data":"667b1b586c9a6458015cef581b08c7e4f2e244d4edacf27345e3671a36716e4e"} Jan 31 07:10:53 crc kubenswrapper[4687]: I0131 07:10:53.612430 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec" path="/var/lib/kubelet/pods/ab36a5e2-c43b-42c5-a9a5-c2e4c9cf71ec/volumes" Jan 31 07:10:53 crc kubenswrapper[4687]: I0131 07:10:53.941151 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"4909dbe9-535d-4581-b009-7c3cb0856689","Type":"ContainerStarted","Data":"d693c07f59777dfd775da580fa6960ae7254e5a7269ee2b547918d2d67994486"} Jan 31 07:10:53 crc kubenswrapper[4687]: I0131 07:10:53.941201 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" 
event={"ID":"4909dbe9-535d-4581-b009-7c3cb0856689","Type":"ContainerStarted","Data":"f5f7121101c453b3ab83aa325d834b19d1cdc4c8d4ed7a789325b6967f0227fb"} Jan 31 07:10:53 crc kubenswrapper[4687]: I0131 07:10:53.941214 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"4909dbe9-535d-4581-b009-7c3cb0856689","Type":"ContainerStarted","Data":"c0959e7fb3e2326733d001d2f02ccb51d0d563072fab87a30110da554f5607e6"} Jan 31 07:10:53 crc kubenswrapper[4687]: I0131 07:10:53.966315 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.9662996010000002 podStartE2EDuration="2.966299601s" podCreationTimestamp="2026-01-31 07:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:10:53.962450715 +0000 UTC m=+1680.239710300" watchObservedRunningTime="2026-01-31 07:10:53.966299601 +0000 UTC m=+1680.243559176" Jan 31 07:10:59 crc kubenswrapper[4687]: I0131 07:10:59.312721 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:59 crc kubenswrapper[4687]: I0131 07:10:59.313320 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:59 crc kubenswrapper[4687]: I0131 07:10:59.313334 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:59 crc kubenswrapper[4687]: I0131 07:10:59.337149 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:59 crc kubenswrapper[4687]: I0131 07:10:59.337214 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:59 crc kubenswrapper[4687]: I0131 07:10:59.365234 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:59 crc kubenswrapper[4687]: I0131 07:10:59.985191 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:59 crc kubenswrapper[4687]: I0131 07:10:59.985708 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:59 crc kubenswrapper[4687]: I0131 07:10:59.985818 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:10:59 crc kubenswrapper[4687]: I0131 07:10:59.998615 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:00 crc kubenswrapper[4687]: I0131 07:11:00.000054 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:00 crc kubenswrapper[4687]: I0131 07:11:00.001649 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:00 crc kubenswrapper[4687]: I0131 07:11:00.602944 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:11:00 crc kubenswrapper[4687]: E0131 07:11:00.603235 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:11:02 crc kubenswrapper[4687]: I0131 07:11:02.275508 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:02 crc kubenswrapper[4687]: I0131 07:11:02.275582 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:02 crc kubenswrapper[4687]: I0131 07:11:02.275601 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:02 crc kubenswrapper[4687]: I0131 07:11:02.302439 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:02 crc kubenswrapper[4687]: I0131 07:11:02.320816 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:02 crc kubenswrapper[4687]: I0131 07:11:02.324949 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:03 crc kubenswrapper[4687]: I0131 07:11:03.008925 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:03 crc kubenswrapper[4687]: I0131 07:11:03.008994 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:03 crc kubenswrapper[4687]: I0131 07:11:03.009014 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:03 crc kubenswrapper[4687]: I0131 07:11:03.020653 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:03 crc kubenswrapper[4687]: I0131 07:11:03.024462 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:03 crc kubenswrapper[4687]: I0131 07:11:03.034981 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.148053 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.150085 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.170903 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.184683 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.193316 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.208465 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.272899 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.274939 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.280728 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.282189 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.296901 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.299827 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31831763-b71d-44b3-9f9b-37926b40fd8f-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.299893 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-logs\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.299919 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-run\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.299947 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-config-data\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.299979 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300003 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300024 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300047 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300075 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300114 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300137 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300165 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300194 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-sys\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300232 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300255 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31831763-b71d-44b3-9f9b-37926b40fd8f-scripts\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300277 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31831763-b71d-44b3-9f9b-37926b40fd8f-config-data\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300301 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300324 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc 
kubenswrapper[4687]: I0131 07:11:05.300350 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-run\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300371 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-sys\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300400 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300450 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300477 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk98l\" (UniqueName: \"kubernetes.io/projected/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-kube-api-access-fk98l\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " 
pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300508 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm894\" (UniqueName: \"kubernetes.io/projected/31831763-b71d-44b3-9f9b-37926b40fd8f-kube-api-access-gm894\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300536 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-scripts\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300565 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-dev\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300588 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-dev\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.300610 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31831763-b71d-44b3-9f9b-37926b40fd8f-logs\") pod \"glance-default-external-api-2\" (UID: 
\"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.313713 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402399 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-scripts\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402487 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-logs\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402517 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-run\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402542 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-config-data\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402567 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402594 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402617 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402640 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402662 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402684 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402775 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2cmd\" (UniqueName: \"kubernetes.io/projected/0bd6ed0d-323b-48db-a48b-0fca933b8228-kube-api-access-d2cmd\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402832 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402873 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402904 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402930 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402940 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.402942 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0bd6ed0d-323b-48db-a48b-0fca933b8228-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403090 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403109 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-logs\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403127 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403182 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403218 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd6ed0d-323b-48db-a48b-0fca933b8228-logs\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403254 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-dev\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403280 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403298 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403313 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403330 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403347 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403366 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403388 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sys\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-sys\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403437 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403467 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-sys\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403481 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403502 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-dev\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403524 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403541 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31831763-b71d-44b3-9f9b-37926b40fd8f-scripts\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403558 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bd6ed0d-323b-48db-a48b-0fca933b8228-scripts\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403575 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31831763-b71d-44b3-9f9b-37926b40fd8f-config-data\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403592 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403607 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403627 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-run\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403646 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403662 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-sys\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403679 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk5tg\" (UniqueName: \"kubernetes.io/projected/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-kube-api-access-lk5tg\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403702 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403719 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-run\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403731 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403749 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403765 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403783 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk98l\" 
(UniqueName: \"kubernetes.io/projected/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-kube-api-access-fk98l\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403798 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-run\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403815 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd6ed0d-323b-48db-a48b-0fca933b8228-config-data\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403835 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm894\" (UniqueName: \"kubernetes.io/projected/31831763-b71d-44b3-9f9b-37926b40fd8f-kube-api-access-gm894\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403855 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-scripts\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403877 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" 
(UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-dev\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403902 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-dev\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403930 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31831763-b71d-44b3-9f9b-37926b40fd8f-logs\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403959 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-config-data\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403979 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31831763-b71d-44b3-9f9b-37926b40fd8f-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403995 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-logs\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.403994 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.404018 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.404054 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.404052 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.404152 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-run\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.404261 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-sys\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.404265 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") device mount path \"/mnt/openstack/pv06\"" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.404281 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.404319 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.404621 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.404786 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-dev\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.404809 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-dev\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.405117 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31831763-b71d-44b3-9f9b-37926b40fd8f-logs\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.405318 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31831763-b71d-44b3-9f9b-37926b40fd8f-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.405362 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-sys\") pod 
\"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.406255 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-sys\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.407076 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.407276 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-run\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.409527 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-scripts\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.411491 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31831763-b71d-44b3-9f9b-37926b40fd8f-config-data\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " 
pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.418987 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31831763-b71d-44b3-9f9b-37926b40fd8f-scripts\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.426026 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-config-data\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.435135 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk98l\" (UniqueName: \"kubernetes.io/projected/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-kube-api-access-fk98l\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.435488 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm894\" (UniqueName: \"kubernetes.io/projected/31831763-b71d-44b3-9f9b-37926b40fd8f-kube-api-access-gm894\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.435740 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 
07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.436134 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.438312 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-1\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.439630 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-2\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.504771 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508101 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-config-data\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508148 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-logs\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508173 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508195 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-sys\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508218 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-scripts\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " 
pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508255 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508280 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508307 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508330 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2cmd\" (UniqueName: \"kubernetes.io/projected/0bd6ed0d-323b-48db-a48b-0fca933b8228-kube-api-access-d2cmd\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508359 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " 
pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508381 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0bd6ed0d-323b-48db-a48b-0fca933b8228-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508400 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd6ed0d-323b-48db-a48b-0fca933b8228-logs\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508443 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-dev\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508472 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508493 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 
07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508514 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508559 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508585 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-sys\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508607 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508631 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-dev\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508658 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bd6ed0d-323b-48db-a48b-0fca933b8228-scripts\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.508751 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.509130 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.509155 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-sys\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.509166 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk5tg\" (UniqueName: \"kubernetes.io/projected/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-kube-api-access-lk5tg\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.509241 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" 
(UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.509261 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-run\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.509482 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") device mount path \"/mnt/openstack/pv19\"" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.509512 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-sys\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.509538 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.509585 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.509984 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510044 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510056 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd6ed0d-323b-48db-a48b-0fca933b8228-logs\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510092 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-run\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510094 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-dev\") pod 
\"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510109 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510141 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510153 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") device mount path \"/mnt/openstack/pv20\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510176 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0bd6ed0d-323b-48db-a48b-0fca933b8228-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510210 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-etc-nvme\") pod 
\"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510176 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510240 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510278 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-run\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510304 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd6ed0d-323b-48db-a48b-0fca933b8228-config-data\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510504 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: 
\"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510555 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-run\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510589 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-dev\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.510645 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") device mount path \"/mnt/openstack/pv15\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.525571 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-logs\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.533251 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.537610 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.539224 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.541248 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bd6ed0d-323b-48db-a48b-0fca933b8228-scripts\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.542216 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-scripts\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.542929 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-config-data\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 
07:11:05.543062 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk5tg\" (UniqueName: \"kubernetes.io/projected/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-kube-api-access-lk5tg\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.543225 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd6ed0d-323b-48db-a48b-0fca933b8228-config-data\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.544285 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2cmd\" (UniqueName: \"kubernetes.io/projected/0bd6ed0d-323b-48db-a48b-0fca933b8228-kube-api-access-d2cmd\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.546948 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-2\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.548924 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-1\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.606947 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.616268 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.923560 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 07:11:05 crc kubenswrapper[4687]: W0131 07:11:05.924154 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb300a606_7b49_4bc8_8aab_6b9e8f55af1c.slice/crio-ca5fb83a943cae86c13e7f3889499d9834b7a9c4692c1c5e1f31da2435859b54 WatchSource:0}: Error finding container ca5fb83a943cae86c13e7f3889499d9834b7a9c4692c1c5e1f31da2435859b54: Status 404 returned error can't find the container with id ca5fb83a943cae86c13e7f3889499d9834b7a9c4692c1c5e1f31da2435859b54 Jan 31 07:11:05 crc kubenswrapper[4687]: I0131 07:11:05.980843 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:11:05 crc kubenswrapper[4687]: W0131 07:11:05.993543 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e1ca8e5_537b_499c_8860_5cc5ce8982b0.slice/crio-2d4520e4b783057d19bce4bd8862fa56c276eb5c1efe43340b7c3521655774ac WatchSource:0}: Error finding container 2d4520e4b783057d19bce4bd8862fa56c276eb5c1efe43340b7c3521655774ac: Status 404 returned error can't find the container with id 2d4520e4b783057d19bce4bd8862fa56c276eb5c1efe43340b7c3521655774ac Jan 31 07:11:06 crc kubenswrapper[4687]: I0131 07:11:06.048009 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" 
event={"ID":"4e1ca8e5-537b-499c-8860-5cc5ce8982b0","Type":"ContainerStarted","Data":"2d4520e4b783057d19bce4bd8862fa56c276eb5c1efe43340b7c3521655774ac"} Jan 31 07:11:06 crc kubenswrapper[4687]: I0131 07:11:06.049305 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"b300a606-7b49-4bc8-8aab-6b9e8f55af1c","Type":"ContainerStarted","Data":"ca5fb83a943cae86c13e7f3889499d9834b7a9c4692c1c5e1f31da2435859b54"} Jan 31 07:11:06 crc kubenswrapper[4687]: I0131 07:11:06.061903 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 07:11:06 crc kubenswrapper[4687]: I0131 07:11:06.179556 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.060862 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"b300a606-7b49-4bc8-8aab-6b9e8f55af1c","Type":"ContainerStarted","Data":"06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80"} Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.062027 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"b300a606-7b49-4bc8-8aab-6b9e8f55af1c","Type":"ContainerStarted","Data":"836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6"} Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.062050 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"b300a606-7b49-4bc8-8aab-6b9e8f55af1c","Type":"ContainerStarted","Data":"6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0"} Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.065689 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" 
event={"ID":"0bd6ed0d-323b-48db-a48b-0fca933b8228","Type":"ContainerStarted","Data":"61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b"} Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.065892 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"0bd6ed0d-323b-48db-a48b-0fca933b8228","Type":"ContainerStarted","Data":"0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3"} Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.066035 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"0bd6ed0d-323b-48db-a48b-0fca933b8228","Type":"ContainerStarted","Data":"6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8"} Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.066196 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"0bd6ed0d-323b-48db-a48b-0fca933b8228","Type":"ContainerStarted","Data":"766908ac1d8c36739d6b09aeafc2edb0371d233582c45dfa76eef5bd26ebea37"} Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.069134 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"31831763-b71d-44b3-9f9b-37926b40fd8f","Type":"ContainerStarted","Data":"2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952"} Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.069222 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"31831763-b71d-44b3-9f9b-37926b40fd8f","Type":"ContainerStarted","Data":"3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9"} Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.069243 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" 
event={"ID":"31831763-b71d-44b3-9f9b-37926b40fd8f","Type":"ContainerStarted","Data":"afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0"} Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.069259 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"31831763-b71d-44b3-9f9b-37926b40fd8f","Type":"ContainerStarted","Data":"7410e0a6ac3f995cb7d0a06b856fb014db54dfefec485703f0c2d5d6a97b18e6"} Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.071620 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"4e1ca8e5-537b-499c-8860-5cc5ce8982b0","Type":"ContainerStarted","Data":"a237d55f745dd6e353785e65ad9414b363720f5b867877cda8b3be434ae1b1bf"} Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.071670 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"4e1ca8e5-537b-499c-8860-5cc5ce8982b0","Type":"ContainerStarted","Data":"57389f08cbc6e1856544505ac1bb5c1d40ac7e928e155d02b73e416ba36484a7"} Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.071692 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"4e1ca8e5-537b-499c-8860-5cc5ce8982b0","Type":"ContainerStarted","Data":"17f99641aaa41afa29755a7a33a189601ef8217901fcc7682105888a5b2ca710"} Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.118141 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-2" podStartSLOduration=3.118117954 podStartE2EDuration="3.118117954s" podCreationTimestamp="2026-01-31 07:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:11:07.101977632 +0000 UTC m=+1693.379237237" watchObservedRunningTime="2026-01-31 
07:11:07.118117954 +0000 UTC m=+1693.395377549" Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.145277 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-2" podStartSLOduration=3.145252726 podStartE2EDuration="3.145252726s" podCreationTimestamp="2026-01-31 07:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:11:07.136726313 +0000 UTC m=+1693.413985928" watchObservedRunningTime="2026-01-31 07:11:07.145252726 +0000 UTC m=+1693.422512321" Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.170155 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-1" podStartSLOduration=3.170130506 podStartE2EDuration="3.170130506s" podCreationTimestamp="2026-01-31 07:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:11:07.16113314 +0000 UTC m=+1693.438392725" watchObservedRunningTime="2026-01-31 07:11:07.170130506 +0000 UTC m=+1693.447390091" Jan 31 07:11:07 crc kubenswrapper[4687]: I0131 07:11:07.194098 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-1" podStartSLOduration=3.194076191 podStartE2EDuration="3.194076191s" podCreationTimestamp="2026-01-31 07:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:11:07.18894322 +0000 UTC m=+1693.466202805" watchObservedRunningTime="2026-01-31 07:11:07.194076191 +0000 UTC m=+1693.471335766" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.504998 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:15 crc 
kubenswrapper[4687]: I0131 07:11:15.505709 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.505727 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.532930 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.533019 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.533436 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.533497 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.533508 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.556610 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.576129 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.583510 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.585921 4687 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.607093 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:11:15 crc kubenswrapper[4687]: E0131 07:11:15.607577 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.625520 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.625609 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.625618 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.625629 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.625639 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.625647 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 
07:11:15.639821 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.640218 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.641093 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.645754 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.652373 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:15 crc kubenswrapper[4687]: I0131 07:11:15.657366 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.147521 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.148058 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.148118 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.148137 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.148155 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.148173 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.148383 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.148430 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.148446 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.148458 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.148469 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.148479 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.165868 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.166234 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.166633 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 
07:11:16.167877 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.168264 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.171858 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.172548 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.173563 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.173708 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.173996 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.177121 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.177578 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.827887 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 07:11:16 crc kubenswrapper[4687]: I0131 07:11:16.841360 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:11:17 crc kubenswrapper[4687]: I0131 07:11:17.040540 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 07:11:17 crc kubenswrapper[4687]: I0131 07:11:17.049872 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:11:17 crc kubenswrapper[4687]: I0131 07:11:17.413295 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nrz8z"] Jan 31 07:11:17 crc kubenswrapper[4687]: I0131 07:11:17.414876 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:17 crc kubenswrapper[4687]: I0131 07:11:17.426131 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nrz8z"] Jan 31 07:11:17 crc kubenswrapper[4687]: I0131 07:11:17.507059 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-catalog-content\") pod \"redhat-marketplace-nrz8z\" (UID: \"3b11de24-19a1-4ee8-a0e1-7688c1f743b7\") " pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:17 crc kubenswrapper[4687]: I0131 07:11:17.507363 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-utilities\") pod \"redhat-marketplace-nrz8z\" (UID: \"3b11de24-19a1-4ee8-a0e1-7688c1f743b7\") " pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:17 crc kubenswrapper[4687]: I0131 07:11:17.507516 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzcr6\" (UniqueName: 
\"kubernetes.io/projected/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-kube-api-access-nzcr6\") pod \"redhat-marketplace-nrz8z\" (UID: \"3b11de24-19a1-4ee8-a0e1-7688c1f743b7\") " pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:17 crc kubenswrapper[4687]: I0131 07:11:17.608321 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzcr6\" (UniqueName: \"kubernetes.io/projected/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-kube-api-access-nzcr6\") pod \"redhat-marketplace-nrz8z\" (UID: \"3b11de24-19a1-4ee8-a0e1-7688c1f743b7\") " pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:17 crc kubenswrapper[4687]: I0131 07:11:17.608773 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-catalog-content\") pod \"redhat-marketplace-nrz8z\" (UID: \"3b11de24-19a1-4ee8-a0e1-7688c1f743b7\") " pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:17 crc kubenswrapper[4687]: I0131 07:11:17.608915 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-utilities\") pod \"redhat-marketplace-nrz8z\" (UID: \"3b11de24-19a1-4ee8-a0e1-7688c1f743b7\") " pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:17 crc kubenswrapper[4687]: I0131 07:11:17.609431 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-catalog-content\") pod \"redhat-marketplace-nrz8z\" (UID: \"3b11de24-19a1-4ee8-a0e1-7688c1f743b7\") " pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:17 crc kubenswrapper[4687]: I0131 07:11:17.609514 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-utilities\") pod \"redhat-marketplace-nrz8z\" (UID: \"3b11de24-19a1-4ee8-a0e1-7688c1f743b7\") " pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:17 crc kubenswrapper[4687]: I0131 07:11:17.640831 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzcr6\" (UniqueName: \"kubernetes.io/projected/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-kube-api-access-nzcr6\") pod \"redhat-marketplace-nrz8z\" (UID: \"3b11de24-19a1-4ee8-a0e1-7688c1f743b7\") " pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:17 crc kubenswrapper[4687]: I0131 07:11:17.732041 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:18 crc kubenswrapper[4687]: I0131 07:11:18.160653 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="b300a606-7b49-4bc8-8aab-6b9e8f55af1c" containerName="glance-log" containerID="cri-o://6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0" gracePeriod=30 Jan 31 07:11:18 crc kubenswrapper[4687]: I0131 07:11:18.161225 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="31831763-b71d-44b3-9f9b-37926b40fd8f" containerName="glance-log" containerID="cri-o://afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0" gracePeriod=30 Jan 31 07:11:18 crc kubenswrapper[4687]: I0131 07:11:18.160775 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="b300a606-7b49-4bc8-8aab-6b9e8f55af1c" containerName="glance-api" containerID="cri-o://06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80" gracePeriod=30 Jan 31 07:11:18 crc kubenswrapper[4687]: I0131 07:11:18.160784 4687 
kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="b300a606-7b49-4bc8-8aab-6b9e8f55af1c" containerName="glance-httpd" containerID="cri-o://836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6" gracePeriod=30 Jan 31 07:11:18 crc kubenswrapper[4687]: I0131 07:11:18.161101 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="4e1ca8e5-537b-499c-8860-5cc5ce8982b0" containerName="glance-api" containerID="cri-o://a237d55f745dd6e353785e65ad9414b363720f5b867877cda8b3be434ae1b1bf" gracePeriod=30 Jan 31 07:11:18 crc kubenswrapper[4687]: I0131 07:11:18.161338 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="31831763-b71d-44b3-9f9b-37926b40fd8f" containerName="glance-api" containerID="cri-o://2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952" gracePeriod=30 Jan 31 07:11:18 crc kubenswrapper[4687]: I0131 07:11:18.160869 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="4e1ca8e5-537b-499c-8860-5cc5ce8982b0" containerName="glance-log" containerID="cri-o://17f99641aaa41afa29755a7a33a189601ef8217901fcc7682105888a5b2ca710" gracePeriod=30 Jan 31 07:11:18 crc kubenswrapper[4687]: I0131 07:11:18.161122 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="4e1ca8e5-537b-499c-8860-5cc5ce8982b0" containerName="glance-httpd" containerID="cri-o://57389f08cbc6e1856544505ac1bb5c1d40ac7e928e155d02b73e416ba36484a7" gracePeriod=30 Jan 31 07:11:18 crc kubenswrapper[4687]: I0131 07:11:18.161363 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" 
podUID="31831763-b71d-44b3-9f9b-37926b40fd8f" containerName="glance-httpd" containerID="cri-o://3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9" gracePeriod=30 Jan 31 07:11:18 crc kubenswrapper[4687]: I0131 07:11:18.161190 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="0bd6ed0d-323b-48db-a48b-0fca933b8228" containerName="glance-httpd" containerID="cri-o://0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3" gracePeriod=30 Jan 31 07:11:18 crc kubenswrapper[4687]: I0131 07:11:18.161123 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="0bd6ed0d-323b-48db-a48b-0fca933b8228" containerName="glance-log" containerID="cri-o://6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8" gracePeriod=30 Jan 31 07:11:18 crc kubenswrapper[4687]: I0131 07:11:18.161175 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="0bd6ed0d-323b-48db-a48b-0fca933b8228" containerName="glance-api" containerID="cri-o://61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b" gracePeriod=30 Jan 31 07:11:18 crc kubenswrapper[4687]: I0131 07:11:18.243070 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nrz8z"] Jan 31 07:11:18 crc kubenswrapper[4687]: E0131 07:11:18.461512 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e1ca8e5_537b_499c_8860_5cc5ce8982b0.slice/crio-17f99641aaa41afa29755a7a33a189601ef8217901fcc7682105888a5b2ca710.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31831763_b71d_44b3_9f9b_37926b40fd8f.slice/crio-3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bd6ed0d_323b_48db_a48b_0fca933b8228.slice/crio-0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb300a606_7b49_4bc8_8aab_6b9e8f55af1c.slice/crio-836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e1ca8e5_537b_499c_8860_5cc5ce8982b0.slice/crio-conmon-57389f08cbc6e1856544505ac1bb5c1d40ac7e928e155d02b73e416ba36484a7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bd6ed0d_323b_48db_a48b_0fca933b8228.slice/crio-conmon-0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31831763_b71d_44b3_9f9b_37926b40fd8f.slice/crio-conmon-3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e1ca8e5_537b_499c_8860_5cc5ce8982b0.slice/crio-57389f08cbc6e1856544505ac1bb5c1d40ac7e928e155d02b73e416ba36484a7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb300a606_7b49_4bc8_8aab_6b9e8f55af1c.slice/crio-conmon-836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bd6ed0d_323b_48db_a48b_0fca933b8228.slice/crio-6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8.scope\": RecentStats: unable to find data in memory cache]" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.066077 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.073904 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.178094 4687 generic.go:334] "Generic (PLEG): container finished" podID="3b11de24-19a1-4ee8-a0e1-7688c1f743b7" containerID="d8db071a489c567f6f62f7e5890684b32372499685b7de9ed570e55590513ea1" exitCode=0 Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.178247 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrz8z" event={"ID":"3b11de24-19a1-4ee8-a0e1-7688c1f743b7","Type":"ContainerDied","Data":"d8db071a489c567f6f62f7e5890684b32372499685b7de9ed570e55590513ea1"} Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.178491 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrz8z" event={"ID":"3b11de24-19a1-4ee8-a0e1-7688c1f743b7","Type":"ContainerStarted","Data":"9bd5351b222a86f48b83837aadbb25b2fdbd8e7368c994e9b67ab1b7c3a75593"} Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.181093 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.182893 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.183003 4687 generic.go:334] "Generic (PLEG): container finished" podID="31831763-b71d-44b3-9f9b-37926b40fd8f" containerID="2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952" exitCode=0 Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.183019 4687 generic.go:334] "Generic (PLEG): container finished" podID="31831763-b71d-44b3-9f9b-37926b40fd8f" containerID="3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9" exitCode=0 Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.183028 4687 generic.go:334] "Generic (PLEG): container finished" podID="31831763-b71d-44b3-9f9b-37926b40fd8f" containerID="afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0" exitCode=143 Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.183066 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"31831763-b71d-44b3-9f9b-37926b40fd8f","Type":"ContainerDied","Data":"2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952"} Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.183087 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"31831763-b71d-44b3-9f9b-37926b40fd8f","Type":"ContainerDied","Data":"3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9"} Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.183099 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"31831763-b71d-44b3-9f9b-37926b40fd8f","Type":"ContainerDied","Data":"afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0"} Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.183111 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" 
event={"ID":"31831763-b71d-44b3-9f9b-37926b40fd8f","Type":"ContainerDied","Data":"7410e0a6ac3f995cb7d0a06b856fb014db54dfefec485703f0c2d5d6a97b18e6"} Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.183129 4687 scope.go:117] "RemoveContainer" containerID="2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.183287 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.186899 4687 generic.go:334] "Generic (PLEG): container finished" podID="4e1ca8e5-537b-499c-8860-5cc5ce8982b0" containerID="a237d55f745dd6e353785e65ad9414b363720f5b867877cda8b3be434ae1b1bf" exitCode=0 Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.186921 4687 generic.go:334] "Generic (PLEG): container finished" podID="4e1ca8e5-537b-499c-8860-5cc5ce8982b0" containerID="57389f08cbc6e1856544505ac1bb5c1d40ac7e928e155d02b73e416ba36484a7" exitCode=0 Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.186929 4687 generic.go:334] "Generic (PLEG): container finished" podID="4e1ca8e5-537b-499c-8860-5cc5ce8982b0" containerID="17f99641aaa41afa29755a7a33a189601ef8217901fcc7682105888a5b2ca710" exitCode=143 Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.186963 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"4e1ca8e5-537b-499c-8860-5cc5ce8982b0","Type":"ContainerDied","Data":"a237d55f745dd6e353785e65ad9414b363720f5b867877cda8b3be434ae1b1bf"} Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.186988 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"4e1ca8e5-537b-499c-8860-5cc5ce8982b0","Type":"ContainerDied","Data":"57389f08cbc6e1856544505ac1bb5c1d40ac7e928e155d02b73e416ba36484a7"} Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 
07:11:19.186999 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"4e1ca8e5-537b-499c-8860-5cc5ce8982b0","Type":"ContainerDied","Data":"17f99641aaa41afa29755a7a33a189601ef8217901fcc7682105888a5b2ca710"} Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.189223 4687 generic.go:334] "Generic (PLEG): container finished" podID="b300a606-7b49-4bc8-8aab-6b9e8f55af1c" containerID="06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80" exitCode=0 Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.189255 4687 generic.go:334] "Generic (PLEG): container finished" podID="b300a606-7b49-4bc8-8aab-6b9e8f55af1c" containerID="836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6" exitCode=0 Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.189261 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.189267 4687 generic.go:334] "Generic (PLEG): container finished" podID="b300a606-7b49-4bc8-8aab-6b9e8f55af1c" containerID="6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0" exitCode=143 Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.189328 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"b300a606-7b49-4bc8-8aab-6b9e8f55af1c","Type":"ContainerDied","Data":"06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80"} Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.189357 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"b300a606-7b49-4bc8-8aab-6b9e8f55af1c","Type":"ContainerDied","Data":"836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6"} Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.189372 4687 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"b300a606-7b49-4bc8-8aab-6b9e8f55af1c","Type":"ContainerDied","Data":"6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0"} Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.189383 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"b300a606-7b49-4bc8-8aab-6b9e8f55af1c","Type":"ContainerDied","Data":"ca5fb83a943cae86c13e7f3889499d9834b7a9c4692c1c5e1f31da2435859b54"} Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.191752 4687 generic.go:334] "Generic (PLEG): container finished" podID="0bd6ed0d-323b-48db-a48b-0fca933b8228" containerID="61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b" exitCode=0 Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.191845 4687 generic.go:334] "Generic (PLEG): container finished" podID="0bd6ed0d-323b-48db-a48b-0fca933b8228" containerID="0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3" exitCode=0 Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.191900 4687 generic.go:334] "Generic (PLEG): container finished" podID="0bd6ed0d-323b-48db-a48b-0fca933b8228" containerID="6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8" exitCode=143 Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.191963 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"0bd6ed0d-323b-48db-a48b-0fca933b8228","Type":"ContainerDied","Data":"61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b"} Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.192077 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"0bd6ed0d-323b-48db-a48b-0fca933b8228","Type":"ContainerDied","Data":"0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3"} Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 
07:11:19.192144 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"0bd6ed0d-323b-48db-a48b-0fca933b8228","Type":"ContainerDied","Data":"6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8"} Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.192248 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.202254 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.207936 4687 scope.go:117] "RemoveContainer" containerID="3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.230016 4687 scope.go:117] "RemoveContainer" containerID="afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.230771 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk5tg\" (UniqueName: \"kubernetes.io/projected/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-kube-api-access-lk5tg\") pod \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.230913 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31831763-b71d-44b3-9f9b-37926b40fd8f-logs\") pod \"31831763-b71d-44b3-9f9b-37926b40fd8f\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.231019 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-httpd-run\") pod 
\"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.231148 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-scripts\") pod \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.231251 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-sys\") pod \"31831763-b71d-44b3-9f9b-37926b40fd8f\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.231346 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-lib-modules\") pod \"31831763-b71d-44b3-9f9b-37926b40fd8f\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.231485 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.231571 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31831763-b71d-44b3-9f9b-37926b40fd8f-httpd-run\") pod \"31831763-b71d-44b3-9f9b-37926b40fd8f\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.231691 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/31831763-b71d-44b3-9f9b-37926b40fd8f-config-data\") pod \"31831763-b71d-44b3-9f9b-37926b40fd8f\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.231869 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-dev\") pod \"31831763-b71d-44b3-9f9b-37926b40fd8f\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.231982 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-var-locks-brick\") pod \"31831763-b71d-44b3-9f9b-37926b40fd8f\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.231695 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "31831763-b71d-44b3-9f9b-37926b40fd8f" (UID: "31831763-b71d-44b3-9f9b-37926b40fd8f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.232117 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "31831763-b71d-44b3-9f9b-37926b40fd8f" (UID: "31831763-b71d-44b3-9f9b-37926b40fd8f"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.232115 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b300a606-7b49-4bc8-8aab-6b9e8f55af1c" (UID: "b300a606-7b49-4bc8-8aab-6b9e8f55af1c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.231730 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-sys" (OuterVolumeSpecName: "sys") pod "31831763-b71d-44b3-9f9b-37926b40fd8f" (UID: "31831763-b71d-44b3-9f9b-37926b40fd8f"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.231948 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31831763-b71d-44b3-9f9b-37926b40fd8f-logs" (OuterVolumeSpecName: "logs") pod "31831763-b71d-44b3-9f9b-37926b40fd8f" (UID: "31831763-b71d-44b3-9f9b-37926b40fd8f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.232076 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31831763-b71d-44b3-9f9b-37926b40fd8f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "31831763-b71d-44b3-9f9b-37926b40fd8f" (UID: "31831763-b71d-44b3-9f9b-37926b40fd8f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.232058 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-dev" (OuterVolumeSpecName: "dev") pod "31831763-b71d-44b3-9f9b-37926b40fd8f" (UID: "31831763-b71d-44b3-9f9b-37926b40fd8f"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.232460 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-sys\") pod \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.232673 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-run\") pod \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.232861 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-etc-nvme\") pod \"31831763-b71d-44b3-9f9b-37926b40fd8f\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.232614 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-sys" (OuterVolumeSpecName: "sys") pod "b300a606-7b49-4bc8-8aab-6b9e8f55af1c" (UID: "b300a606-7b49-4bc8-8aab-6b9e8f55af1c"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.232803 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-run" (OuterVolumeSpecName: "run") pod "b300a606-7b49-4bc8-8aab-6b9e8f55af1c" (UID: "b300a606-7b49-4bc8-8aab-6b9e8f55af1c"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.232969 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "31831763-b71d-44b3-9f9b-37926b40fd8f" (UID: "31831763-b71d-44b3-9f9b-37926b40fd8f"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.233365 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.233734 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-etc-iscsi\") pod \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.233822 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "b300a606-7b49-4bc8-8aab-6b9e8f55af1c" (UID: "b300a606-7b49-4bc8-8aab-6b9e8f55af1c"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.234004 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-config-data\") pod \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.234097 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-dev\") pod \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.234215 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-lib-modules\") pod \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.234350 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-etc-nvme\") pod \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.234503 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm894\" (UniqueName: \"kubernetes.io/projected/31831763-b71d-44b3-9f9b-37926b40fd8f-kube-api-access-gm894\") pod \"31831763-b71d-44b3-9f9b-37926b40fd8f\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.234650 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-var-locks-brick\") pod \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.234201 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-dev" (OuterVolumeSpecName: "dev") pod "b300a606-7b49-4bc8-8aab-6b9e8f55af1c" (UID: "b300a606-7b49-4bc8-8aab-6b9e8f55af1c"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.234245 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b300a606-7b49-4bc8-8aab-6b9e8f55af1c" (UID: "b300a606-7b49-4bc8-8aab-6b9e8f55af1c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.234763 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"31831763-b71d-44b3-9f9b-37926b40fd8f\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.234840 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-etc-iscsi\") pod \"31831763-b71d-44b3-9f9b-37926b40fd8f\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.234857 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "b300a606-7b49-4bc8-8aab-6b9e8f55af1c" (UID: 
"b300a606-7b49-4bc8-8aab-6b9e8f55af1c"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.234887 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"31831763-b71d-44b3-9f9b-37926b40fd8f\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.234914 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-run\") pod \"31831763-b71d-44b3-9f9b-37926b40fd8f\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.234933 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-logs\") pod \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\" (UID: \"b300a606-7b49-4bc8-8aab-6b9e8f55af1c\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235003 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31831763-b71d-44b3-9f9b-37926b40fd8f-scripts\") pod \"31831763-b71d-44b3-9f9b-37926b40fd8f\" (UID: \"31831763-b71d-44b3-9f9b-37926b40fd8f\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.234913 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "31831763-b71d-44b3-9f9b-37926b40fd8f" (UID: "31831763-b71d-44b3-9f9b-37926b40fd8f"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235138 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-run" (OuterVolumeSpecName: "run") pod "31831763-b71d-44b3-9f9b-37926b40fd8f" (UID: "31831763-b71d-44b3-9f9b-37926b40fd8f"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235468 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-logs" (OuterVolumeSpecName: "logs") pod "b300a606-7b49-4bc8-8aab-6b9e8f55af1c" (UID: "b300a606-7b49-4bc8-8aab-6b9e8f55af1c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235687 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31831763-b71d-44b3-9f9b-37926b40fd8f-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235706 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235714 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235723 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235730 4687 reconciler_common.go:293] "Volume 
detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31831763-b71d-44b3-9f9b-37926b40fd8f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235738 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235746 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235755 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235762 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235770 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235777 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235786 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235793 4687 
reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235801 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235808 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235815 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/31831763-b71d-44b3-9f9b-37926b40fd8f-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.235824 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.234511 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "b300a606-7b49-4bc8-8aab-6b9e8f55af1c" (UID: "b300a606-7b49-4bc8-8aab-6b9e8f55af1c"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.266679 4687 scope.go:117] "RemoveContainer" containerID="2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952" Jan 31 07:11:19 crc kubenswrapper[4687]: E0131 07:11:19.267173 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952\": container with ID starting with 2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952 not found: ID does not exist" containerID="2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.267216 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952"} err="failed to get container status \"2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952\": rpc error: code = NotFound desc = could not find container \"2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952\": container with ID starting with 2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.267245 4687 scope.go:117] "RemoveContainer" containerID="3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9" Jan 31 07:11:19 crc kubenswrapper[4687]: E0131 07:11:19.267553 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9\": container with ID starting with 3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9 not found: ID does not exist" containerID="3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.267589 
4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9"} err="failed to get container status \"3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9\": rpc error: code = NotFound desc = could not find container \"3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9\": container with ID starting with 3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.267614 4687 scope.go:117] "RemoveContainer" containerID="afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0" Jan 31 07:11:19 crc kubenswrapper[4687]: E0131 07:11:19.267877 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0\": container with ID starting with afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0 not found: ID does not exist" containerID="afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.267917 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0"} err="failed to get container status \"afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0\": rpc error: code = NotFound desc = could not find container \"afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0\": container with ID starting with afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.267951 4687 scope.go:117] "RemoveContainer" containerID="2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 
07:11:19.268260 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952"} err="failed to get container status \"2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952\": rpc error: code = NotFound desc = could not find container \"2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952\": container with ID starting with 2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.268286 4687 scope.go:117] "RemoveContainer" containerID="3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.268505 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9"} err="failed to get container status \"3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9\": rpc error: code = NotFound desc = could not find container \"3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9\": container with ID starting with 3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.268537 4687 scope.go:117] "RemoveContainer" containerID="afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.268752 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0"} err="failed to get container status \"afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0\": rpc error: code = NotFound desc = could not find container \"afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0\": container with ID starting with 
afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.268778 4687 scope.go:117] "RemoveContainer" containerID="2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.269001 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952"} err="failed to get container status \"2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952\": rpc error: code = NotFound desc = could not find container \"2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952\": container with ID starting with 2965e10f2c62f619162d1c1f1174561c7ed5749e9adbf185bacb72c93743b952 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.269023 4687 scope.go:117] "RemoveContainer" containerID="3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.269184 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9"} err="failed to get container status \"3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9\": rpc error: code = NotFound desc = could not find container \"3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9\": container with ID starting with 3daa175cb19b3cbb009d748f2e3f08a727b050fe7259f402ceddfe9b70cbb9e9 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.269204 4687 scope.go:117] "RemoveContainer" containerID="afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.269426 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0"} err="failed to get container status \"afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0\": rpc error: code = NotFound desc = could not find container \"afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0\": container with ID starting with afc0ec6ae94613c8048c1ee7a8368fa19525ec7c847de0e2b284bfb6d191dbb0 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.269453 4687 scope.go:117] "RemoveContainer" containerID="06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.273046 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-scripts" (OuterVolumeSpecName: "scripts") pod "b300a606-7b49-4bc8-8aab-6b9e8f55af1c" (UID: "b300a606-7b49-4bc8-8aab-6b9e8f55af1c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.273047 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31831763-b71d-44b3-9f9b-37926b40fd8f-scripts" (OuterVolumeSpecName: "scripts") pod "31831763-b71d-44b3-9f9b-37926b40fd8f" (UID: "31831763-b71d-44b3-9f9b-37926b40fd8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.273087 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage19-crc" (OuterVolumeSpecName: "glance") pod "b300a606-7b49-4bc8-8aab-6b9e8f55af1c" (UID: "b300a606-7b49-4bc8-8aab-6b9e8f55af1c"). InnerVolumeSpecName "local-storage19-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.273193 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31831763-b71d-44b3-9f9b-37926b40fd8f-kube-api-access-gm894" (OuterVolumeSpecName: "kube-api-access-gm894") pod "31831763-b71d-44b3-9f9b-37926b40fd8f" (UID: "31831763-b71d-44b3-9f9b-37926b40fd8f"). InnerVolumeSpecName "kube-api-access-gm894". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.273288 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-kube-api-access-lk5tg" (OuterVolumeSpecName: "kube-api-access-lk5tg") pod "b300a606-7b49-4bc8-8aab-6b9e8f55af1c" (UID: "b300a606-7b49-4bc8-8aab-6b9e8f55af1c"). InnerVolumeSpecName "kube-api-access-lk5tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.274508 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance-cache") pod "b300a606-7b49-4bc8-8aab-6b9e8f55af1c" (UID: "b300a606-7b49-4bc8-8aab-6b9e8f55af1c"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.289566 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance-cache") pod "31831763-b71d-44b3-9f9b-37926b40fd8f" (UID: "31831763-b71d-44b3-9f9b-37926b40fd8f"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.289975 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "31831763-b71d-44b3-9f9b-37926b40fd8f" (UID: "31831763-b71d-44b3-9f9b-37926b40fd8f"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.321706 4687 scope.go:117] "RemoveContainer" containerID="836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336577 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd6ed0d-323b-48db-a48b-0fca933b8228-config-data\") pod \"0bd6ed0d-323b-48db-a48b-0fca933b8228\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336609 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-lib-modules\") pod \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336637 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0bd6ed0d-323b-48db-a48b-0fca933b8228-httpd-run\") pod \"0bd6ed0d-323b-48db-a48b-0fca933b8228\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336671 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bd6ed0d-323b-48db-a48b-0fca933b8228-scripts\") pod \"0bd6ed0d-323b-48db-a48b-0fca933b8228\" (UID: 
\"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336688 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2cmd\" (UniqueName: \"kubernetes.io/projected/0bd6ed0d-323b-48db-a48b-0fca933b8228-kube-api-access-d2cmd\") pod \"0bd6ed0d-323b-48db-a48b-0fca933b8228\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336706 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-lib-modules\") pod \"0bd6ed0d-323b-48db-a48b-0fca933b8228\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336730 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk98l\" (UniqueName: \"kubernetes.io/projected/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-kube-api-access-fk98l\") pod \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336780 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-etc-iscsi\") pod \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336797 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-var-locks-brick\") pod \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336811 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"0bd6ed0d-323b-48db-a48b-0fca933b8228\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336827 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-var-locks-brick\") pod \"0bd6ed0d-323b-48db-a48b-0fca933b8228\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336849 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-dev\") pod \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336877 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336906 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-etc-nvme\") pod \"0bd6ed0d-323b-48db-a48b-0fca933b8228\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336932 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-sys\") pod \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336961 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" 
(UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-run\") pod \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336984 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-etc-iscsi\") pod \"0bd6ed0d-323b-48db-a48b-0fca933b8228\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.336998 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-etc-nvme\") pod \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337019 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-scripts\") pod \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337036 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"0bd6ed0d-323b-48db-a48b-0fca933b8228\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337062 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-sys\") pod \"0bd6ed0d-323b-48db-a48b-0fca933b8228\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337080 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"run\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-run\") pod \"0bd6ed0d-323b-48db-a48b-0fca933b8228\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337097 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-httpd-run\") pod \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337114 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-config-data\") pod \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337138 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-logs\") pod \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337152 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-dev\") pod \"0bd6ed0d-323b-48db-a48b-0fca933b8228\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337179 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\" (UID: \"4e1ca8e5-537b-499c-8860-5cc5ce8982b0\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337195 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd6ed0d-323b-48db-a48b-0fca933b8228-logs\") pod \"0bd6ed0d-323b-48db-a48b-0fca933b8228\" (UID: \"0bd6ed0d-323b-48db-a48b-0fca933b8228\") " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337502 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337514 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337524 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm894\" (UniqueName: \"kubernetes.io/projected/31831763-b71d-44b3-9f9b-37926b40fd8f-kube-api-access-gm894\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337536 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337548 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337557 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31831763-b71d-44b3-9f9b-37926b40fd8f-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337565 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk5tg\" (UniqueName: \"kubernetes.io/projected/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-kube-api-access-lk5tg\") on 
node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337573 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.337585 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") on node \"crc\" " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.338558 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-run" (OuterVolumeSpecName: "run") pod "4e1ca8e5-537b-499c-8860-5cc5ce8982b0" (UID: "4e1ca8e5-537b-499c-8860-5cc5ce8982b0"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.350715 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-sys" (OuterVolumeSpecName: "sys") pod "0bd6ed0d-323b-48db-a48b-0fca933b8228" (UID: "0bd6ed0d-323b-48db-a48b-0fca933b8228"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.350835 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-run" (OuterVolumeSpecName: "run") pod "0bd6ed0d-323b-48db-a48b-0fca933b8228" (UID: "0bd6ed0d-323b-48db-a48b-0fca933b8228"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.351141 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4e1ca8e5-537b-499c-8860-5cc5ce8982b0" (UID: "4e1ca8e5-537b-499c-8860-5cc5ce8982b0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.351427 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "4e1ca8e5-537b-499c-8860-5cc5ce8982b0" (UID: "4e1ca8e5-537b-499c-8860-5cc5ce8982b0"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.351462 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "0bd6ed0d-323b-48db-a48b-0fca933b8228" (UID: "0bd6ed0d-323b-48db-a48b-0fca933b8228"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.351483 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-dev" (OuterVolumeSpecName: "dev") pod "0bd6ed0d-323b-48db-a48b-0fca933b8228" (UID: "0bd6ed0d-323b-48db-a48b-0fca933b8228"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.351867 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-logs" (OuterVolumeSpecName: "logs") pod "4e1ca8e5-537b-499c-8860-5cc5ce8982b0" (UID: "4e1ca8e5-537b-499c-8860-5cc5ce8982b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.351905 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "4e1ca8e5-537b-499c-8860-5cc5ce8982b0" (UID: "4e1ca8e5-537b-499c-8860-5cc5ce8982b0"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.351895 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bd6ed0d-323b-48db-a48b-0fca933b8228-logs" (OuterVolumeSpecName: "logs") pod "0bd6ed0d-323b-48db-a48b-0fca933b8228" (UID: "0bd6ed0d-323b-48db-a48b-0fca933b8228"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.351933 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4e1ca8e5-537b-499c-8860-5cc5ce8982b0" (UID: "4e1ca8e5-537b-499c-8860-5cc5ce8982b0"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.352232 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bd6ed0d-323b-48db-a48b-0fca933b8228-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0bd6ed0d-323b-48db-a48b-0fca933b8228" (UID: "0bd6ed0d-323b-48db-a48b-0fca933b8228"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.353480 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "4e1ca8e5-537b-499c-8860-5cc5ce8982b0" (UID: "4e1ca8e5-537b-499c-8860-5cc5ce8982b0"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.354299 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0bd6ed0d-323b-48db-a48b-0fca933b8228" (UID: "0bd6ed0d-323b-48db-a48b-0fca933b8228"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.371611 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "0bd6ed0d-323b-48db-a48b-0fca933b8228" (UID: "0bd6ed0d-323b-48db-a48b-0fca933b8228"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.371681 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-sys" (OuterVolumeSpecName: "sys") pod "4e1ca8e5-537b-499c-8860-5cc5ce8982b0" (UID: "4e1ca8e5-537b-499c-8860-5cc5ce8982b0"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.371709 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "0bd6ed0d-323b-48db-a48b-0fca933b8228" (UID: "0bd6ed0d-323b-48db-a48b-0fca933b8228"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.371738 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-dev" (OuterVolumeSpecName: "dev") pod "4e1ca8e5-537b-499c-8860-5cc5ce8982b0" (UID: "4e1ca8e5-537b-499c-8860-5cc5ce8982b0"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.384382 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage15-crc" (OuterVolumeSpecName: "glance") pod "0bd6ed0d-323b-48db-a48b-0fca933b8228" (UID: "0bd6ed0d-323b-48db-a48b-0fca933b8228"). InnerVolumeSpecName "local-storage15-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.385576 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage20-crc" (OuterVolumeSpecName: "glance-cache") pod "0bd6ed0d-323b-48db-a48b-0fca933b8228" (UID: "0bd6ed0d-323b-48db-a48b-0fca933b8228"). InnerVolumeSpecName "local-storage20-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.385665 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-kube-api-access-fk98l" (OuterVolumeSpecName: "kube-api-access-fk98l") pod "4e1ca8e5-537b-499c-8860-5cc5ce8982b0" (UID: "4e1ca8e5-537b-499c-8860-5cc5ce8982b0"). InnerVolumeSpecName "kube-api-access-fk98l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.391656 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance") pod "4e1ca8e5-537b-499c-8860-5cc5ce8982b0" (UID: "4e1ca8e5-537b-499c-8860-5cc5ce8982b0"). InnerVolumeSpecName "local-storage13-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.391788 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance-cache") pod "4e1ca8e5-537b-499c-8860-5cc5ce8982b0" (UID: "4e1ca8e5-537b-499c-8860-5cc5ce8982b0"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.397677 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd6ed0d-323b-48db-a48b-0fca933b8228-scripts" (OuterVolumeSpecName: "scripts") pod "0bd6ed0d-323b-48db-a48b-0fca933b8228" (UID: "0bd6ed0d-323b-48db-a48b-0fca933b8228"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.400921 4687 scope.go:117] "RemoveContainer" containerID="6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.428608 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage19-crc" (UniqueName: "kubernetes.io/local-volume/local-storage19-crc") on node "crc" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.439617 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.439617 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440654 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk98l\" (UniqueName: \"kubernetes.io/projected/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-kube-api-access-fk98l\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440670 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440681 4687 reconciler_common.go:293] "Volume detached for 
volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440706 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") on node \"crc\" " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440727 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440736 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440747 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440759 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440767 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440775 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440783 4687 
reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440790 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440798 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440805 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440818 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") on node \"crc\" " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440827 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440835 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440843 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 
07:11:19.440851 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440858 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440870 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440879 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd6ed0d-323b-48db-a48b-0fca933b8228-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440887 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440894 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0bd6ed0d-323b-48db-a48b-0fca933b8228-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440903 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bd6ed0d-323b-48db-a48b-0fca933b8228-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.440911 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc 
kubenswrapper[4687]: I0131 07:11:19.440919 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bd6ed0d-323b-48db-a48b-0fca933b8228-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.454529 4687 scope.go:117] "RemoveContainer" containerID="06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.458565 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-scripts" (OuterVolumeSpecName: "scripts") pod "4e1ca8e5-537b-499c-8860-5cc5ce8982b0" (UID: "4e1ca8e5-537b-499c-8860-5cc5ce8982b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.459646 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd6ed0d-323b-48db-a48b-0fca933b8228-kube-api-access-d2cmd" (OuterVolumeSpecName: "kube-api-access-d2cmd") pod "0bd6ed0d-323b-48db-a48b-0fca933b8228" (UID: "0bd6ed0d-323b-48db-a48b-0fca933b8228"). InnerVolumeSpecName "kube-api-access-d2cmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: E0131 07:11:19.459845 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80\": container with ID starting with 06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80 not found: ID does not exist" containerID="06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.459927 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80"} err="failed to get container status \"06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80\": rpc error: code = NotFound desc = could not find container \"06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80\": container with ID starting with 06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.459958 4687 scope.go:117] "RemoveContainer" containerID="836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6" Jan 31 07:11:19 crc kubenswrapper[4687]: E0131 07:11:19.461286 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6\": container with ID starting with 836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6 not found: ID does not exist" containerID="836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.461328 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6"} 
err="failed to get container status \"836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6\": rpc error: code = NotFound desc = could not find container \"836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6\": container with ID starting with 836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.461361 4687 scope.go:117] "RemoveContainer" containerID="6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0" Jan 31 07:11:19 crc kubenswrapper[4687]: E0131 07:11:19.461778 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0\": container with ID starting with 6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0 not found: ID does not exist" containerID="6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.461884 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0"} err="failed to get container status \"6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0\": rpc error: code = NotFound desc = could not find container \"6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0\": container with ID starting with 6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.461952 4687 scope.go:117] "RemoveContainer" containerID="06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.462262 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80"} err="failed to get container status \"06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80\": rpc error: code = NotFound desc = could not find container \"06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80\": container with ID starting with 06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.462287 4687 scope.go:117] "RemoveContainer" containerID="836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.462640 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6"} err="failed to get container status \"836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6\": rpc error: code = NotFound desc = could not find container \"836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6\": container with ID starting with 836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.462741 4687 scope.go:117] "RemoveContainer" containerID="6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.463014 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0"} err="failed to get container status \"6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0\": rpc error: code = NotFound desc = could not find container \"6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0\": container with ID starting with 6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0 not found: ID does not 
exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.463038 4687 scope.go:117] "RemoveContainer" containerID="06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.463224 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80"} err="failed to get container status \"06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80\": rpc error: code = NotFound desc = could not find container \"06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80\": container with ID starting with 06574dfa5f1d942a8ed7b3f0cd1c51d4b364042af9ac3eb60bcd5cd249297a80 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.463237 4687 scope.go:117] "RemoveContainer" containerID="836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.463559 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6"} err="failed to get container status \"836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6\": rpc error: code = NotFound desc = could not find container \"836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6\": container with ID starting with 836b57698f6f728150b5ae7a3ff8dad341e89a0912b31654071243a5948915c6 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.463581 4687 scope.go:117] "RemoveContainer" containerID="6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.463829 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0"} err="failed to get container status 
\"6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0\": rpc error: code = NotFound desc = could not find container \"6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0\": container with ID starting with 6bfa1a866da3f4b7ba57cc44e96121e18adc0c7d7ac0aa1fdda9b4006a8214e0 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.463853 4687 scope.go:117] "RemoveContainer" containerID="61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.468351 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage15-crc" (UniqueName: "kubernetes.io/local-volume/local-storage15-crc") on node "crc" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.484087 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.484429 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.490094 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-config-data" (OuterVolumeSpecName: "config-data") pod "b300a606-7b49-4bc8-8aab-6b9e8f55af1c" (UID: "b300a606-7b49-4bc8-8aab-6b9e8f55af1c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.495984 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.501286 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage20-crc" (UniqueName: "kubernetes.io/local-volume/local-storage20-crc") on node "crc" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.501676 4687 scope.go:117] "RemoveContainer" containerID="0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.509165 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31831763-b71d-44b3-9f9b-37926b40fd8f-config-data" (OuterVolumeSpecName: "config-data") pod "31831763-b71d-44b3-9f9b-37926b40fd8f" (UID: "31831763-b71d-44b3-9f9b-37926b40fd8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.518638 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-config-data" (OuterVolumeSpecName: "config-data") pod "4e1ca8e5-537b-499c-8860-5cc5ce8982b0" (UID: "4e1ca8e5-537b-499c-8860-5cc5ce8982b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.529163 4687 scope.go:117] "RemoveContainer" containerID="6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.542501 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.542525 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.542534 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.542542 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1ca8e5-537b-499c-8860-5cc5ce8982b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.542550 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.542558 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2cmd\" (UniqueName: \"kubernetes.io/projected/0bd6ed0d-323b-48db-a48b-0fca933b8228-kube-api-access-d2cmd\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.542569 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31831763-b71d-44b3-9f9b-37926b40fd8f-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.542576 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.542584 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.542593 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b300a606-7b49-4bc8-8aab-6b9e8f55af1c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.545097 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd6ed0d-323b-48db-a48b-0fca933b8228-config-data" (OuterVolumeSpecName: "config-data") pod "0bd6ed0d-323b-48db-a48b-0fca933b8228" (UID: "0bd6ed0d-323b-48db-a48b-0fca933b8228"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.551722 4687 scope.go:117] "RemoveContainer" containerID="61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b" Jan 31 07:11:19 crc kubenswrapper[4687]: E0131 07:11:19.552233 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b\": container with ID starting with 61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b not found: ID does not exist" containerID="61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.552267 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b"} err="failed to get container status \"61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b\": rpc error: code = NotFound desc = could not find container \"61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b\": container with ID starting with 61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.552292 4687 scope.go:117] "RemoveContainer" containerID="0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3" Jan 31 07:11:19 crc kubenswrapper[4687]: E0131 07:11:19.552940 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3\": container with ID starting with 0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3 not found: ID does not exist" containerID="0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.552970 
4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3"} err="failed to get container status \"0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3\": rpc error: code = NotFound desc = could not find container \"0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3\": container with ID starting with 0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.552988 4687 scope.go:117] "RemoveContainer" containerID="6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8" Jan 31 07:11:19 crc kubenswrapper[4687]: E0131 07:11:19.553387 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8\": container with ID starting with 6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8 not found: ID does not exist" containerID="6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.553459 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8"} err="failed to get container status \"6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8\": rpc error: code = NotFound desc = could not find container \"6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8\": container with ID starting with 6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.553476 4687 scope.go:117] "RemoveContainer" containerID="61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 
07:11:19.553697 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b"} err="failed to get container status \"61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b\": rpc error: code = NotFound desc = could not find container \"61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b\": container with ID starting with 61f81de5cf3c171eb4cb7f5ad0f987b94fa6cdc97b67fc656cb6fbe4e9cd155b not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.553715 4687 scope.go:117] "RemoveContainer" containerID="0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.553963 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3"} err="failed to get container status \"0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3\": rpc error: code = NotFound desc = could not find container \"0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3\": container with ID starting with 0a22a62541274ecbe90a64d46285cd5068c2358d0ad782a559841334f92658f3 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.553981 4687 scope.go:117] "RemoveContainer" containerID="6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.554299 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8"} err="failed to get container status \"6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8\": rpc error: code = NotFound desc = could not find container \"6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8\": container with ID starting with 
6d2c1a56726e64e2c4883f50b00b36da253e8412b58282e6193d9798ae5be9a8 not found: ID does not exist" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.644404 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd6ed0d-323b-48db-a48b-0fca933b8228-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.810474 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.817708 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.836385 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.878693 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.888971 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:11:19 crc kubenswrapper[4687]: I0131 07:11:19.894912 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:11:20 crc kubenswrapper[4687]: I0131 07:11:20.203574 4687 generic.go:334] "Generic (PLEG): container finished" podID="3b11de24-19a1-4ee8-a0e1-7688c1f743b7" containerID="9d38596a6445809da7827b3ddcf0fd48c0c3e4e2e30d77d93dabab4af7200ac5" exitCode=0 Jan 31 07:11:20 crc kubenswrapper[4687]: I0131 07:11:20.203690 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrz8z" event={"ID":"3b11de24-19a1-4ee8-a0e1-7688c1f743b7","Type":"ContainerDied","Data":"9d38596a6445809da7827b3ddcf0fd48c0c3e4e2e30d77d93dabab4af7200ac5"} Jan 31 07:11:20 crc 
kubenswrapper[4687]: I0131 07:11:20.207132 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"4e1ca8e5-537b-499c-8860-5cc5ce8982b0","Type":"ContainerDied","Data":"2d4520e4b783057d19bce4bd8862fa56c276eb5c1efe43340b7c3521655774ac"} Jan 31 07:11:20 crc kubenswrapper[4687]: I0131 07:11:20.207180 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:11:20 crc kubenswrapper[4687]: I0131 07:11:20.207188 4687 scope.go:117] "RemoveContainer" containerID="a237d55f745dd6e353785e65ad9414b363720f5b867877cda8b3be434ae1b1bf" Jan 31 07:11:20 crc kubenswrapper[4687]: I0131 07:11:20.233899 4687 scope.go:117] "RemoveContainer" containerID="57389f08cbc6e1856544505ac1bb5c1d40ac7e928e155d02b73e416ba36484a7" Jan 31 07:11:20 crc kubenswrapper[4687]: I0131 07:11:20.240990 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:11:20 crc kubenswrapper[4687]: I0131 07:11:20.247694 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:11:20 crc kubenswrapper[4687]: I0131 07:11:20.257355 4687 scope.go:117] "RemoveContainer" containerID="17f99641aaa41afa29755a7a33a189601ef8217901fcc7682105888a5b2ca710" Jan 31 07:11:20 crc kubenswrapper[4687]: I0131 07:11:20.969462 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:11:20 crc kubenswrapper[4687]: I0131 07:11:20.969987 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="b0d237c4-ca7c-4e37-a6a3-e169266ef83d" containerName="glance-log" containerID="cri-o://08ca1fe50d68d79ee615191136361c0d5f438478e8abed34dbef305b2746334c" gracePeriod=30 Jan 31 07:11:20 crc kubenswrapper[4687]: I0131 07:11:20.970101 4687 
kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="b0d237c4-ca7c-4e37-a6a3-e169266ef83d" containerName="glance-httpd" containerID="cri-o://4c3e8b7d15b7ecdbf797bbe95423fed0e201799c4016278961f9cc6d6ad18181" gracePeriod=30 Jan 31 07:11:20 crc kubenswrapper[4687]: I0131 07:11:20.970105 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="b0d237c4-ca7c-4e37-a6a3-e169266ef83d" containerName="glance-api" containerID="cri-o://8e3d948bb248cba3e65ecd22cf3eb76e7b5ba393187ea621b6372675ab176181" gracePeriod=30 Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.221936 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrz8z" event={"ID":"3b11de24-19a1-4ee8-a0e1-7688c1f743b7","Type":"ContainerStarted","Data":"34e3b1330a785dd066f89f861017cc5d49a614cbfe40c44aa561335f95746a39"} Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.224907 4687 generic.go:334] "Generic (PLEG): container finished" podID="b0d237c4-ca7c-4e37-a6a3-e169266ef83d" containerID="4c3e8b7d15b7ecdbf797bbe95423fed0e201799c4016278961f9cc6d6ad18181" exitCode=0 Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.224945 4687 generic.go:334] "Generic (PLEG): container finished" podID="b0d237c4-ca7c-4e37-a6a3-e169266ef83d" containerID="08ca1fe50d68d79ee615191136361c0d5f438478e8abed34dbef305b2746334c" exitCode=143 Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.224938 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"b0d237c4-ca7c-4e37-a6a3-e169266ef83d","Type":"ContainerDied","Data":"4c3e8b7d15b7ecdbf797bbe95423fed0e201799c4016278961f9cc6d6ad18181"} Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.224988 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"b0d237c4-ca7c-4e37-a6a3-e169266ef83d","Type":"ContainerDied","Data":"08ca1fe50d68d79ee615191136361c0d5f438478e8abed34dbef305b2746334c"} Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.241424 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nrz8z" podStartSLOduration=2.828831498 podStartE2EDuration="4.241385193s" podCreationTimestamp="2026-01-31 07:11:17 +0000 UTC" firstStartedPulling="2026-01-31 07:11:19.180810058 +0000 UTC m=+1705.458069633" lastFinishedPulling="2026-01-31 07:11:20.593363743 +0000 UTC m=+1706.870623328" observedRunningTime="2026-01-31 07:11:21.239024299 +0000 UTC m=+1707.516283884" watchObservedRunningTime="2026-01-31 07:11:21.241385193 +0000 UTC m=+1707.518644768" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.587727 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.588158 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="4909dbe9-535d-4581-b009-7c3cb0856689" containerName="glance-log" containerID="cri-o://c0959e7fb3e2326733d001d2f02ccb51d0d563072fab87a30110da554f5607e6" gracePeriod=30 Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.588252 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="4909dbe9-535d-4581-b009-7c3cb0856689" containerName="glance-httpd" containerID="cri-o://f5f7121101c453b3ab83aa325d834b19d1cdc4c8d4ed7a789325b6967f0227fb" gracePeriod=30 Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.588323 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="4909dbe9-535d-4581-b009-7c3cb0856689" 
containerName="glance-api" containerID="cri-o://d693c07f59777dfd775da580fa6960ae7254e5a7269ee2b547918d2d67994486" gracePeriod=30 Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.622966 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd6ed0d-323b-48db-a48b-0fca933b8228" path="/var/lib/kubelet/pods/0bd6ed0d-323b-48db-a48b-0fca933b8228/volumes" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.623971 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31831763-b71d-44b3-9f9b-37926b40fd8f" path="/var/lib/kubelet/pods/31831763-b71d-44b3-9f9b-37926b40fd8f/volumes" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.625372 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e1ca8e5-537b-499c-8860-5cc5ce8982b0" path="/var/lib/kubelet/pods/4e1ca8e5-537b-499c-8860-5cc5ce8982b0/volumes" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.626051 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b300a606-7b49-4bc8-8aab-6b9e8f55af1c" path="/var/lib/kubelet/pods/b300a606-7b49-4bc8-8aab-6b9e8f55af1c/volumes" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.779393 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880304 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-run\") pod \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880372 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880435 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-httpd-run\") pod \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880475 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-etc-nvme\") pod \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880513 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880536 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-scripts\") pod 
\"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880574 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "b0d237c4-ca7c-4e37-a6a3-e169266ef83d" (UID: "b0d237c4-ca7c-4e37-a6a3-e169266ef83d"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880604 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-sys\") pod \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880628 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-dev\") pod \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880647 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-etc-iscsi\") pod \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880658 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-sys" (OuterVolumeSpecName: "sys") pod "b0d237c4-ca7c-4e37-a6a3-e169266ef83d" (UID: "b0d237c4-ca7c-4e37-a6a3-e169266ef83d"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880664 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mlsb\" (UniqueName: \"kubernetes.io/projected/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-kube-api-access-6mlsb\") pod \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880685 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-lib-modules\") pod \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880709 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-config-data\") pod \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880714 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-dev" (OuterVolumeSpecName: "dev") pod "b0d237c4-ca7c-4e37-a6a3-e169266ef83d" (UID: "b0d237c4-ca7c-4e37-a6a3-e169266ef83d"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880743 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-logs\") pod \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.880777 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-var-locks-brick\") pod \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\" (UID: \"b0d237c4-ca7c-4e37-a6a3-e169266ef83d\") " Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.881041 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.881054 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.881063 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.881095 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "b0d237c4-ca7c-4e37-a6a3-e169266ef83d" (UID: "b0d237c4-ca7c-4e37-a6a3-e169266ef83d"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.881124 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b0d237c4-ca7c-4e37-a6a3-e169266ef83d" (UID: "b0d237c4-ca7c-4e37-a6a3-e169266ef83d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.881119 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "b0d237c4-ca7c-4e37-a6a3-e169266ef83d" (UID: "b0d237c4-ca7c-4e37-a6a3-e169266ef83d"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.881243 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-run" (OuterVolumeSpecName: "run") pod "b0d237c4-ca7c-4e37-a6a3-e169266ef83d" (UID: "b0d237c4-ca7c-4e37-a6a3-e169266ef83d"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.881332 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b0d237c4-ca7c-4e37-a6a3-e169266ef83d" (UID: "b0d237c4-ca7c-4e37-a6a3-e169266ef83d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.881475 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-logs" (OuterVolumeSpecName: "logs") pod "b0d237c4-ca7c-4e37-a6a3-e169266ef83d" (UID: "b0d237c4-ca7c-4e37-a6a3-e169266ef83d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.887634 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance-cache") pod "b0d237c4-ca7c-4e37-a6a3-e169266ef83d" (UID: "b0d237c4-ca7c-4e37-a6a3-e169266ef83d"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.887642 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage14-crc" (OuterVolumeSpecName: "glance") pod "b0d237c4-ca7c-4e37-a6a3-e169266ef83d" (UID: "b0d237c4-ca7c-4e37-a6a3-e169266ef83d"). InnerVolumeSpecName "local-storage14-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.887635 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-scripts" (OuterVolumeSpecName: "scripts") pod "b0d237c4-ca7c-4e37-a6a3-e169266ef83d" (UID: "b0d237c4-ca7c-4e37-a6a3-e169266ef83d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.888284 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-kube-api-access-6mlsb" (OuterVolumeSpecName: "kube-api-access-6mlsb") pod "b0d237c4-ca7c-4e37-a6a3-e169266ef83d" (UID: "b0d237c4-ca7c-4e37-a6a3-e169266ef83d"). InnerVolumeSpecName "kube-api-access-6mlsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.953194 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-config-data" (OuterVolumeSpecName: "config-data") pod "b0d237c4-ca7c-4e37-a6a3-e169266ef83d" (UID: "b0d237c4-ca7c-4e37-a6a3-e169266ef83d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.982303 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.982362 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.982380 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.982391 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:21 
crc kubenswrapper[4687]: I0131 07:11:21.982424 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.982505 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.982530 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.982553 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" " Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.982570 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.982586 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:21 crc kubenswrapper[4687]: I0131 07:11:21.982605 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mlsb\" (UniqueName: \"kubernetes.io/projected/b0d237c4-ca7c-4e37-a6a3-e169266ef83d-kube-api-access-6mlsb\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.001846 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage14-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage14-crc") on node "crc" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.002247 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.084008 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.084042 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.237250 4687 generic.go:334] "Generic (PLEG): container finished" podID="4909dbe9-535d-4581-b009-7c3cb0856689" containerID="d693c07f59777dfd775da580fa6960ae7254e5a7269ee2b547918d2d67994486" exitCode=0 Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.237285 4687 generic.go:334] "Generic (PLEG): container finished" podID="4909dbe9-535d-4581-b009-7c3cb0856689" containerID="f5f7121101c453b3ab83aa325d834b19d1cdc4c8d4ed7a789325b6967f0227fb" exitCode=0 Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.237295 4687 generic.go:334] "Generic (PLEG): container finished" podID="4909dbe9-535d-4581-b009-7c3cb0856689" containerID="c0959e7fb3e2326733d001d2f02ccb51d0d563072fab87a30110da554f5607e6" exitCode=143 Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.237344 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"4909dbe9-535d-4581-b009-7c3cb0856689","Type":"ContainerDied","Data":"d693c07f59777dfd775da580fa6960ae7254e5a7269ee2b547918d2d67994486"} Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.237378 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"4909dbe9-535d-4581-b009-7c3cb0856689","Type":"ContainerDied","Data":"f5f7121101c453b3ab83aa325d834b19d1cdc4c8d4ed7a789325b6967f0227fb"} Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.237401 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"4909dbe9-535d-4581-b009-7c3cb0856689","Type":"ContainerDied","Data":"c0959e7fb3e2326733d001d2f02ccb51d0d563072fab87a30110da554f5607e6"} Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.240496 4687 generic.go:334] "Generic (PLEG): container finished" podID="b0d237c4-ca7c-4e37-a6a3-e169266ef83d" containerID="8e3d948bb248cba3e65ecd22cf3eb76e7b5ba393187ea621b6372675ab176181" exitCode=0 Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.241104 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.241278 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"b0d237c4-ca7c-4e37-a6a3-e169266ef83d","Type":"ContainerDied","Data":"8e3d948bb248cba3e65ecd22cf3eb76e7b5ba393187ea621b6372675ab176181"} Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.241310 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"b0d237c4-ca7c-4e37-a6a3-e169266ef83d","Type":"ContainerDied","Data":"e7032d14c04d9089d038066d8eb1f024f4ce9ac49c9d2190639cfa03a8ef7e66"} Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.241333 4687 scope.go:117] "RemoveContainer" containerID="8e3d948bb248cba3e65ecd22cf3eb76e7b5ba393187ea621b6372675ab176181" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.267554 4687 scope.go:117] "RemoveContainer" containerID="4c3e8b7d15b7ecdbf797bbe95423fed0e201799c4016278961f9cc6d6ad18181" Jan 31 07:11:22 
crc kubenswrapper[4687]: I0131 07:11:22.287224 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.293845 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.301231 4687 scope.go:117] "RemoveContainer" containerID="08ca1fe50d68d79ee615191136361c0d5f438478e8abed34dbef305b2746334c" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.369963 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.371515 4687 scope.go:117] "RemoveContainer" containerID="8e3d948bb248cba3e65ecd22cf3eb76e7b5ba393187ea621b6372675ab176181" Jan 31 07:11:22 crc kubenswrapper[4687]: E0131 07:11:22.371854 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e3d948bb248cba3e65ecd22cf3eb76e7b5ba393187ea621b6372675ab176181\": container with ID starting with 8e3d948bb248cba3e65ecd22cf3eb76e7b5ba393187ea621b6372675ab176181 not found: ID does not exist" containerID="8e3d948bb248cba3e65ecd22cf3eb76e7b5ba393187ea621b6372675ab176181" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.371891 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e3d948bb248cba3e65ecd22cf3eb76e7b5ba393187ea621b6372675ab176181"} err="failed to get container status \"8e3d948bb248cba3e65ecd22cf3eb76e7b5ba393187ea621b6372675ab176181\": rpc error: code = NotFound desc = could not find container \"8e3d948bb248cba3e65ecd22cf3eb76e7b5ba393187ea621b6372675ab176181\": container with ID starting with 8e3d948bb248cba3e65ecd22cf3eb76e7b5ba393187ea621b6372675ab176181 not found: ID does not exist" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 
07:11:22.371934 4687 scope.go:117] "RemoveContainer" containerID="4c3e8b7d15b7ecdbf797bbe95423fed0e201799c4016278961f9cc6d6ad18181" Jan 31 07:11:22 crc kubenswrapper[4687]: E0131 07:11:22.372196 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c3e8b7d15b7ecdbf797bbe95423fed0e201799c4016278961f9cc6d6ad18181\": container with ID starting with 4c3e8b7d15b7ecdbf797bbe95423fed0e201799c4016278961f9cc6d6ad18181 not found: ID does not exist" containerID="4c3e8b7d15b7ecdbf797bbe95423fed0e201799c4016278961f9cc6d6ad18181" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.372226 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c3e8b7d15b7ecdbf797bbe95423fed0e201799c4016278961f9cc6d6ad18181"} err="failed to get container status \"4c3e8b7d15b7ecdbf797bbe95423fed0e201799c4016278961f9cc6d6ad18181\": rpc error: code = NotFound desc = could not find container \"4c3e8b7d15b7ecdbf797bbe95423fed0e201799c4016278961f9cc6d6ad18181\": container with ID starting with 4c3e8b7d15b7ecdbf797bbe95423fed0e201799c4016278961f9cc6d6ad18181 not found: ID does not exist" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.372246 4687 scope.go:117] "RemoveContainer" containerID="08ca1fe50d68d79ee615191136361c0d5f438478e8abed34dbef305b2746334c" Jan 31 07:11:22 crc kubenswrapper[4687]: E0131 07:11:22.372504 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08ca1fe50d68d79ee615191136361c0d5f438478e8abed34dbef305b2746334c\": container with ID starting with 08ca1fe50d68d79ee615191136361c0d5f438478e8abed34dbef305b2746334c not found: ID does not exist" containerID="08ca1fe50d68d79ee615191136361c0d5f438478e8abed34dbef305b2746334c" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.372530 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"08ca1fe50d68d79ee615191136361c0d5f438478e8abed34dbef305b2746334c"} err="failed to get container status \"08ca1fe50d68d79ee615191136361c0d5f438478e8abed34dbef305b2746334c\": rpc error: code = NotFound desc = could not find container \"08ca1fe50d68d79ee615191136361c0d5f438478e8abed34dbef305b2746334c\": container with ID starting with 08ca1fe50d68d79ee615191136361c0d5f438478e8abed34dbef305b2746334c not found: ID does not exist" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489153 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-lib-modules\") pod \"4909dbe9-535d-4581-b009-7c3cb0856689\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489230 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"4909dbe9-535d-4581-b009-7c3cb0856689\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489255 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4909dbe9-535d-4581-b009-7c3cb0856689-httpd-run\") pod \"4909dbe9-535d-4581-b009-7c3cb0856689\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489268 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4909dbe9-535d-4581-b009-7c3cb0856689" (UID: "4909dbe9-535d-4581-b009-7c3cb0856689"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489282 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-run\") pod \"4909dbe9-535d-4581-b009-7c3cb0856689\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489339 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-run" (OuterVolumeSpecName: "run") pod "4909dbe9-535d-4581-b009-7c3cb0856689" (UID: "4909dbe9-535d-4581-b009-7c3cb0856689"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489363 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4909dbe9-535d-4581-b009-7c3cb0856689-scripts\") pod \"4909dbe9-535d-4581-b009-7c3cb0856689\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489389 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-etc-nvme\") pod \"4909dbe9-535d-4581-b009-7c3cb0856689\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489521 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-dev\") pod \"4909dbe9-535d-4581-b009-7c3cb0856689\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489568 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "4909dbe9-535d-4581-b009-7c3cb0856689" (UID: "4909dbe9-535d-4581-b009-7c3cb0856689"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489581 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"4909dbe9-535d-4581-b009-7c3cb0856689\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489636 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-etc-iscsi\") pod \"4909dbe9-535d-4581-b009-7c3cb0856689\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489669 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-sys\") pod \"4909dbe9-535d-4581-b009-7c3cb0856689\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489678 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4909dbe9-535d-4581-b009-7c3cb0856689-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4909dbe9-535d-4581-b009-7c3cb0856689" (UID: "4909dbe9-535d-4581-b009-7c3cb0856689"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489681 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-dev" (OuterVolumeSpecName: "dev") pod "4909dbe9-535d-4581-b009-7c3cb0856689" (UID: "4909dbe9-535d-4581-b009-7c3cb0856689"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489741 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4909dbe9-535d-4581-b009-7c3cb0856689-config-data\") pod \"4909dbe9-535d-4581-b009-7c3cb0856689\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489759 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-sys" (OuterVolumeSpecName: "sys") pod "4909dbe9-535d-4581-b009-7c3cb0856689" (UID: "4909dbe9-535d-4581-b009-7c3cb0856689"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489769 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-var-locks-brick\") pod \"4909dbe9-535d-4581-b009-7c3cb0856689\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489775 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "4909dbe9-535d-4581-b009-7c3cb0856689" (UID: "4909dbe9-535d-4581-b009-7c3cb0856689"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489793 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "4909dbe9-535d-4581-b009-7c3cb0856689" (UID: "4909dbe9-535d-4581-b009-7c3cb0856689"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489808 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj6r6\" (UniqueName: \"kubernetes.io/projected/4909dbe9-535d-4581-b009-7c3cb0856689-kube-api-access-pj6r6\") pod \"4909dbe9-535d-4581-b009-7c3cb0856689\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.489926 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4909dbe9-535d-4581-b009-7c3cb0856689-logs\") pod \"4909dbe9-535d-4581-b009-7c3cb0856689\" (UID: \"4909dbe9-535d-4581-b009-7c3cb0856689\") " Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.490266 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4909dbe9-535d-4581-b009-7c3cb0856689-logs" (OuterVolumeSpecName: "logs") pod "4909dbe9-535d-4581-b009-7c3cb0856689" (UID: "4909dbe9-535d-4581-b009-7c3cb0856689"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.490550 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.490899 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.490913 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.490924 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.490936 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.490947 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.490960 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4909dbe9-535d-4581-b009-7c3cb0856689-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.490970 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4909dbe9-535d-4581-b009-7c3cb0856689-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.490981 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4909dbe9-535d-4581-b009-7c3cb0856689-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.493610 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4909dbe9-535d-4581-b009-7c3cb0856689-scripts" (OuterVolumeSpecName: "scripts") pod "4909dbe9-535d-4581-b009-7c3cb0856689" (UID: "4909dbe9-535d-4581-b009-7c3cb0856689"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.493693 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage18-crc" (OuterVolumeSpecName: "glance") pod "4909dbe9-535d-4581-b009-7c3cb0856689" (UID: "4909dbe9-535d-4581-b009-7c3cb0856689"). InnerVolumeSpecName "local-storage18-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.493900 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4909dbe9-535d-4581-b009-7c3cb0856689-kube-api-access-pj6r6" (OuterVolumeSpecName: "kube-api-access-pj6r6") pod "4909dbe9-535d-4581-b009-7c3cb0856689" (UID: "4909dbe9-535d-4581-b009-7c3cb0856689"). InnerVolumeSpecName "kube-api-access-pj6r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.496026 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage16-crc" (OuterVolumeSpecName: "glance-cache") pod "4909dbe9-535d-4581-b009-7c3cb0856689" (UID: "4909dbe9-535d-4581-b009-7c3cb0856689"). 
InnerVolumeSpecName "local-storage16-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.555156 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4909dbe9-535d-4581-b009-7c3cb0856689-config-data" (OuterVolumeSpecName: "config-data") pod "4909dbe9-535d-4581-b009-7c3cb0856689" (UID: "4909dbe9-535d-4581-b009-7c3cb0856689"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.593062 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" " Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.593095 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4909dbe9-535d-4581-b009-7c3cb0856689-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.593110 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" " Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.593119 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4909dbe9-535d-4581-b009-7c3cb0856689-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.593127 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj6r6\" (UniqueName: \"kubernetes.io/projected/4909dbe9-535d-4581-b009-7c3cb0856689-kube-api-access-pj6r6\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.614596 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage18-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage18-crc") on node "crc" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.624525 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage16-crc" (UniqueName: "kubernetes.io/local-volume/local-storage16-crc") on node "crc" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.694890 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:22 crc kubenswrapper[4687]: I0131 07:11:22.694933 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.250880 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"4909dbe9-535d-4581-b009-7c3cb0856689","Type":"ContainerDied","Data":"667b1b586c9a6458015cef581b08c7e4f2e244d4edacf27345e3671a36716e4e"} Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.250945 4687 scope.go:117] "RemoveContainer" containerID="d693c07f59777dfd775da580fa6960ae7254e5a7269ee2b547918d2d67994486" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.251107 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.286195 4687 scope.go:117] "RemoveContainer" containerID="f5f7121101c453b3ab83aa325d834b19d1cdc4c8d4ed7a789325b6967f0227fb" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.287336 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.294423 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.310466 4687 scope.go:117] "RemoveContainer" containerID="c0959e7fb3e2326733d001d2f02ccb51d0d563072fab87a30110da554f5607e6" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.613119 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4909dbe9-535d-4581-b009-7c3cb0856689" path="/var/lib/kubelet/pods/4909dbe9-535d-4581-b009-7c3cb0856689/volumes" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.613988 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d237c4-ca7c-4e37-a6a3-e169266ef83d" path="/var/lib/kubelet/pods/b0d237c4-ca7c-4e37-a6a3-e169266ef83d/volumes" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.952762 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ch7fd"] Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953019 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d237c4-ca7c-4e37-a6a3-e169266ef83d" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953029 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d237c4-ca7c-4e37-a6a3-e169266ef83d" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953047 4687 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4909dbe9-535d-4581-b009-7c3cb0856689" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953053 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4909dbe9-535d-4581-b009-7c3cb0856689" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953066 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b300a606-7b49-4bc8-8aab-6b9e8f55af1c" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953071 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b300a606-7b49-4bc8-8aab-6b9e8f55af1c" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953082 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31831763-b71d-44b3-9f9b-37926b40fd8f" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953087 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="31831763-b71d-44b3-9f9b-37926b40fd8f" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953102 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1ca8e5-537b-499c-8860-5cc5ce8982b0" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953109 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1ca8e5-537b-499c-8860-5cc5ce8982b0" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953119 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4909dbe9-535d-4581-b009-7c3cb0856689" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953124 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4909dbe9-535d-4581-b009-7c3cb0856689" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953138 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd6ed0d-323b-48db-a48b-0fca933b8228" 
containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953144 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd6ed0d-323b-48db-a48b-0fca933b8228" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953157 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31831763-b71d-44b3-9f9b-37926b40fd8f" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953162 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="31831763-b71d-44b3-9f9b-37926b40fd8f" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953177 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4909dbe9-535d-4581-b009-7c3cb0856689" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953182 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4909dbe9-535d-4581-b009-7c3cb0856689" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953191 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d237c4-ca7c-4e37-a6a3-e169266ef83d" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953197 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d237c4-ca7c-4e37-a6a3-e169266ef83d" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953206 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd6ed0d-323b-48db-a48b-0fca933b8228" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953213 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd6ed0d-323b-48db-a48b-0fca933b8228" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953224 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b300a606-7b49-4bc8-8aab-6b9e8f55af1c" containerName="glance-httpd" Jan 31 07:11:23 crc 
kubenswrapper[4687]: I0131 07:11:23.953230 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b300a606-7b49-4bc8-8aab-6b9e8f55af1c" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953239 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d237c4-ca7c-4e37-a6a3-e169266ef83d" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953245 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d237c4-ca7c-4e37-a6a3-e169266ef83d" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953254 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1ca8e5-537b-499c-8860-5cc5ce8982b0" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953260 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1ca8e5-537b-499c-8860-5cc5ce8982b0" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953266 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31831763-b71d-44b3-9f9b-37926b40fd8f" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953272 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="31831763-b71d-44b3-9f9b-37926b40fd8f" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953296 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1ca8e5-537b-499c-8860-5cc5ce8982b0" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953302 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1ca8e5-537b-499c-8860-5cc5ce8982b0" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953311 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd6ed0d-323b-48db-a48b-0fca933b8228" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953316 4687 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd6ed0d-323b-48db-a48b-0fca933b8228" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: E0131 07:11:23.953325 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b300a606-7b49-4bc8-8aab-6b9e8f55af1c" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953330 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b300a606-7b49-4bc8-8aab-6b9e8f55af1c" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953485 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1ca8e5-537b-499c-8860-5cc5ce8982b0" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953497 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd6ed0d-323b-48db-a48b-0fca933b8228" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953506 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4909dbe9-535d-4581-b009-7c3cb0856689" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953515 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd6ed0d-323b-48db-a48b-0fca933b8228" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953522 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1ca8e5-537b-499c-8860-5cc5ce8982b0" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953530 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b300a606-7b49-4bc8-8aab-6b9e8f55af1c" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953539 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="31831763-b71d-44b3-9f9b-37926b40fd8f" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953547 4687 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="31831763-b71d-44b3-9f9b-37926b40fd8f" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953554 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd6ed0d-323b-48db-a48b-0fca933b8228" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953563 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b300a606-7b49-4bc8-8aab-6b9e8f55af1c" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953570 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="31831763-b71d-44b3-9f9b-37926b40fd8f" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953579 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b300a606-7b49-4bc8-8aab-6b9e8f55af1c" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953588 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4909dbe9-535d-4581-b009-7c3cb0856689" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953597 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1ca8e5-537b-499c-8860-5cc5ce8982b0" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953603 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d237c4-ca7c-4e37-a6a3-e169266ef83d" containerName="glance-log" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953609 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d237c4-ca7c-4e37-a6a3-e169266ef83d" containerName="glance-httpd" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953615 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4909dbe9-535d-4581-b009-7c3cb0856689" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.953624 4687 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b0d237c4-ca7c-4e37-a6a3-e169266ef83d" containerName="glance-api" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.954580 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:23 crc kubenswrapper[4687]: I0131 07:11:23.965384 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ch7fd"] Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.014332 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfbn9\" (UniqueName: \"kubernetes.io/projected/f6a903e2-5caa-405b-a762-5f619e51cd76-kube-api-access-mfbn9\") pod \"community-operators-ch7fd\" (UID: \"f6a903e2-5caa-405b-a762-5f619e51cd76\") " pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.014573 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a903e2-5caa-405b-a762-5f619e51cd76-utilities\") pod \"community-operators-ch7fd\" (UID: \"f6a903e2-5caa-405b-a762-5f619e51cd76\") " pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.014763 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a903e2-5caa-405b-a762-5f619e51cd76-catalog-content\") pod \"community-operators-ch7fd\" (UID: \"f6a903e2-5caa-405b-a762-5f619e51cd76\") " pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.116453 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a903e2-5caa-405b-a762-5f619e51cd76-utilities\") pod 
\"community-operators-ch7fd\" (UID: \"f6a903e2-5caa-405b-a762-5f619e51cd76\") " pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.116531 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a903e2-5caa-405b-a762-5f619e51cd76-catalog-content\") pod \"community-operators-ch7fd\" (UID: \"f6a903e2-5caa-405b-a762-5f619e51cd76\") " pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.116584 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfbn9\" (UniqueName: \"kubernetes.io/projected/f6a903e2-5caa-405b-a762-5f619e51cd76-kube-api-access-mfbn9\") pod \"community-operators-ch7fd\" (UID: \"f6a903e2-5caa-405b-a762-5f619e51cd76\") " pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.117051 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a903e2-5caa-405b-a762-5f619e51cd76-utilities\") pod \"community-operators-ch7fd\" (UID: \"f6a903e2-5caa-405b-a762-5f619e51cd76\") " pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.117080 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a903e2-5caa-405b-a762-5f619e51cd76-catalog-content\") pod \"community-operators-ch7fd\" (UID: \"f6a903e2-5caa-405b-a762-5f619e51cd76\") " pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.143355 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfbn9\" (UniqueName: \"kubernetes.io/projected/f6a903e2-5caa-405b-a762-5f619e51cd76-kube-api-access-mfbn9\") pod 
\"community-operators-ch7fd\" (UID: \"f6a903e2-5caa-405b-a762-5f619e51cd76\") " pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.273609 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.441883 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-wldts"] Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.449366 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-wldts"] Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.457019 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance7e39-account-delete-gphkj"] Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.457984 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance7e39-account-delete-gphkj" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.469390 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance7e39-account-delete-gphkj"] Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.528309 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6q6h\" (UniqueName: \"kubernetes.io/projected/e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2-kube-api-access-l6q6h\") pod \"glance7e39-account-delete-gphkj\" (UID: \"e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2\") " pod="glance-kuttl-tests/glance7e39-account-delete-gphkj" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.528367 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2-operator-scripts\") pod \"glance7e39-account-delete-gphkj\" (UID: 
\"e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2\") " pod="glance-kuttl-tests/glance7e39-account-delete-gphkj" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.629868 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6q6h\" (UniqueName: \"kubernetes.io/projected/e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2-kube-api-access-l6q6h\") pod \"glance7e39-account-delete-gphkj\" (UID: \"e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2\") " pod="glance-kuttl-tests/glance7e39-account-delete-gphkj" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.630256 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2-operator-scripts\") pod \"glance7e39-account-delete-gphkj\" (UID: \"e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2\") " pod="glance-kuttl-tests/glance7e39-account-delete-gphkj" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.631257 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2-operator-scripts\") pod \"glance7e39-account-delete-gphkj\" (UID: \"e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2\") " pod="glance-kuttl-tests/glance7e39-account-delete-gphkj" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.676197 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6q6h\" (UniqueName: \"kubernetes.io/projected/e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2-kube-api-access-l6q6h\") pod \"glance7e39-account-delete-gphkj\" (UID: \"e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2\") " pod="glance-kuttl-tests/glance7e39-account-delete-gphkj" Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.702836 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ch7fd"] Jan 31 07:11:24 crc kubenswrapper[4687]: I0131 07:11:24.790865 4687 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance7e39-account-delete-gphkj" Jan 31 07:11:25 crc kubenswrapper[4687]: I0131 07:11:25.248931 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance7e39-account-delete-gphkj"] Jan 31 07:11:25 crc kubenswrapper[4687]: I0131 07:11:25.276016 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance7e39-account-delete-gphkj" event={"ID":"e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2","Type":"ContainerStarted","Data":"a031b221737c07d71804c325cb4dc794b0b177f328998bf813b447e0c6ee2e07"} Jan 31 07:11:25 crc kubenswrapper[4687]: I0131 07:11:25.278091 4687 generic.go:334] "Generic (PLEG): container finished" podID="f6a903e2-5caa-405b-a762-5f619e51cd76" containerID="d3e28a87aa12212c4c83a07801b6ae9d58b6f808e564d838a519304aaef2f77e" exitCode=0 Jan 31 07:11:25 crc kubenswrapper[4687]: I0131 07:11:25.278140 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch7fd" event={"ID":"f6a903e2-5caa-405b-a762-5f619e51cd76","Type":"ContainerDied","Data":"d3e28a87aa12212c4c83a07801b6ae9d58b6f808e564d838a519304aaef2f77e"} Jan 31 07:11:25 crc kubenswrapper[4687]: I0131 07:11:25.278200 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch7fd" event={"ID":"f6a903e2-5caa-405b-a762-5f619e51cd76","Type":"ContainerStarted","Data":"29e2ae6f32dd36067c4627aef2c4f1fbaab1e696d27c4a5af7c709989eac4332"} Jan 31 07:11:25 crc kubenswrapper[4687]: I0131 07:11:25.615992 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="836a862f-9202-40c4-92ca-8d3167ceab49" path="/var/lib/kubelet/pods/836a862f-9202-40c4-92ca-8d3167ceab49/volumes" Jan 31 07:11:26 crc kubenswrapper[4687]: I0131 07:11:26.305164 4687 generic.go:334] "Generic (PLEG): container finished" podID="e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2" 
containerID="50195bf1daec8e92669fa77bbca3ded2dce68fbab162dab6d3681104455abf51" exitCode=0 Jan 31 07:11:26 crc kubenswrapper[4687]: I0131 07:11:26.305289 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance7e39-account-delete-gphkj" event={"ID":"e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2","Type":"ContainerDied","Data":"50195bf1daec8e92669fa77bbca3ded2dce68fbab162dab6d3681104455abf51"} Jan 31 07:11:26 crc kubenswrapper[4687]: I0131 07:11:26.308356 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch7fd" event={"ID":"f6a903e2-5caa-405b-a762-5f619e51cd76","Type":"ContainerStarted","Data":"89436ca9077ef42e9a18ce2b3375c20c38592f1602abc33059e35a28d0105139"} Jan 31 07:11:27 crc kubenswrapper[4687]: I0131 07:11:27.318827 4687 generic.go:334] "Generic (PLEG): container finished" podID="f6a903e2-5caa-405b-a762-5f619e51cd76" containerID="89436ca9077ef42e9a18ce2b3375c20c38592f1602abc33059e35a28d0105139" exitCode=0 Jan 31 07:11:27 crc kubenswrapper[4687]: I0131 07:11:27.318927 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch7fd" event={"ID":"f6a903e2-5caa-405b-a762-5f619e51cd76","Type":"ContainerDied","Data":"89436ca9077ef42e9a18ce2b3375c20c38592f1602abc33059e35a28d0105139"} Jan 31 07:11:27 crc kubenswrapper[4687]: I0131 07:11:27.624818 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance7e39-account-delete-gphkj" Jan 31 07:11:27 crc kubenswrapper[4687]: I0131 07:11:27.672314 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6q6h\" (UniqueName: \"kubernetes.io/projected/e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2-kube-api-access-l6q6h\") pod \"e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2\" (UID: \"e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2\") " Jan 31 07:11:27 crc kubenswrapper[4687]: I0131 07:11:27.672478 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2-operator-scripts\") pod \"e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2\" (UID: \"e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2\") " Jan 31 07:11:27 crc kubenswrapper[4687]: I0131 07:11:27.675284 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2" (UID: "e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:11:27 crc kubenswrapper[4687]: I0131 07:11:27.678574 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2-kube-api-access-l6q6h" (OuterVolumeSpecName: "kube-api-access-l6q6h") pod "e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2" (UID: "e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2"). InnerVolumeSpecName "kube-api-access-l6q6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:11:27 crc kubenswrapper[4687]: I0131 07:11:27.732159 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:27 crc kubenswrapper[4687]: I0131 07:11:27.732536 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:27 crc kubenswrapper[4687]: I0131 07:11:27.774701 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6q6h\" (UniqueName: \"kubernetes.io/projected/e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2-kube-api-access-l6q6h\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:27 crc kubenswrapper[4687]: I0131 07:11:27.774731 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:27 crc kubenswrapper[4687]: I0131 07:11:27.779455 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:28 crc kubenswrapper[4687]: I0131 07:11:28.328085 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch7fd" event={"ID":"f6a903e2-5caa-405b-a762-5f619e51cd76","Type":"ContainerStarted","Data":"e1608db3836500d4148927f291047e06ef8c0907bb8ba9997f65cded4b72965b"} Jan 31 07:11:28 crc kubenswrapper[4687]: I0131 07:11:28.329267 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance7e39-account-delete-gphkj" Jan 31 07:11:28 crc kubenswrapper[4687]: I0131 07:11:28.329310 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance7e39-account-delete-gphkj" event={"ID":"e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2","Type":"ContainerDied","Data":"a031b221737c07d71804c325cb4dc794b0b177f328998bf813b447e0c6ee2e07"} Jan 31 07:11:28 crc kubenswrapper[4687]: I0131 07:11:28.329355 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a031b221737c07d71804c325cb4dc794b0b177f328998bf813b447e0c6ee2e07" Jan 31 07:11:28 crc kubenswrapper[4687]: I0131 07:11:28.350238 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ch7fd" podStartSLOduration=2.898258675 podStartE2EDuration="5.350221613s" podCreationTimestamp="2026-01-31 07:11:23 +0000 UTC" firstStartedPulling="2026-01-31 07:11:25.28006104 +0000 UTC m=+1711.557320615" lastFinishedPulling="2026-01-31 07:11:27.732023968 +0000 UTC m=+1714.009283553" observedRunningTime="2026-01-31 07:11:28.346655115 +0000 UTC m=+1714.623914680" watchObservedRunningTime="2026-01-31 07:11:28.350221613 +0000 UTC m=+1714.627481188" Jan 31 07:11:28 crc kubenswrapper[4687]: I0131 07:11:28.371845 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.517753 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-7e39-account-create-update-nxcld"] Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.524373 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance7e39-account-delete-gphkj"] Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.530773 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-mb94x"] Jan 31 07:11:29 crc 
kubenswrapper[4687]: I0131 07:11:29.536921 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-7e39-account-create-update-nxcld"] Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.549265 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance7e39-account-delete-gphkj"] Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.559391 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-mb94x"] Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.602755 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:11:29 crc kubenswrapper[4687]: E0131 07:11:29.602949 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.611259 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="151bae23-bc79-469e-a56c-b8f85ca84e7d" path="/var/lib/kubelet/pods/151bae23-bc79-469e-a56c-b8f85ca84e7d/volumes" Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.612146 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b97fb5e7-2d73-4e8e-9c27-c222d4c23c76" path="/var/lib/kubelet/pods/b97fb5e7-2d73-4e8e-9c27-c222d4c23c76/volumes" Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.612795 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2" path="/var/lib/kubelet/pods/e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2/volumes" Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.761328 4687 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-pjkn6"] Jan 31 07:11:29 crc kubenswrapper[4687]: E0131 07:11:29.761629 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2" containerName="mariadb-account-delete" Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.761643 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2" containerName="mariadb-account-delete" Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.761793 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e97dfed1-e8d8-40fd-8b55-32d6aee2c6e2" containerName="mariadb-account-delete" Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.762222 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-pjkn6" Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.770708 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-pjkn6"] Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.805237 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad5f32b0-dd27-4c63-91a6-a63cb5bf5452-operator-scripts\") pod \"glance-db-create-pjkn6\" (UID: \"ad5f32b0-dd27-4c63-91a6-a63cb5bf5452\") " pod="glance-kuttl-tests/glance-db-create-pjkn6" Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.805323 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr256\" (UniqueName: \"kubernetes.io/projected/ad5f32b0-dd27-4c63-91a6-a63cb5bf5452-kube-api-access-gr256\") pod \"glance-db-create-pjkn6\" (UID: \"ad5f32b0-dd27-4c63-91a6-a63cb5bf5452\") " pod="glance-kuttl-tests/glance-db-create-pjkn6" Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.919380 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad5f32b0-dd27-4c63-91a6-a63cb5bf5452-operator-scripts\") pod \"glance-db-create-pjkn6\" (UID: \"ad5f32b0-dd27-4c63-91a6-a63cb5bf5452\") " pod="glance-kuttl-tests/glance-db-create-pjkn6" Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.919516 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr256\" (UniqueName: \"kubernetes.io/projected/ad5f32b0-dd27-4c63-91a6-a63cb5bf5452-kube-api-access-gr256\") pod \"glance-db-create-pjkn6\" (UID: \"ad5f32b0-dd27-4c63-91a6-a63cb5bf5452\") " pod="glance-kuttl-tests/glance-db-create-pjkn6" Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.920271 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad5f32b0-dd27-4c63-91a6-a63cb5bf5452-operator-scripts\") pod \"glance-db-create-pjkn6\" (UID: \"ad5f32b0-dd27-4c63-91a6-a63cb5bf5452\") " pod="glance-kuttl-tests/glance-db-create-pjkn6" Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.942084 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-c94f-account-create-update-f8wfc"] Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.942975 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-c94f-account-create-update-f8wfc" Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.947758 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.954900 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-c94f-account-create-update-f8wfc"] Jan 31 07:11:29 crc kubenswrapper[4687]: I0131 07:11:29.962167 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr256\" (UniqueName: \"kubernetes.io/projected/ad5f32b0-dd27-4c63-91a6-a63cb5bf5452-kube-api-access-gr256\") pod \"glance-db-create-pjkn6\" (UID: \"ad5f32b0-dd27-4c63-91a6-a63cb5bf5452\") " pod="glance-kuttl-tests/glance-db-create-pjkn6" Jan 31 07:11:30 crc kubenswrapper[4687]: I0131 07:11:30.020511 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w8n5\" (UniqueName: \"kubernetes.io/projected/6f6324d3-a530-4ce5-b00a-6f77fd585509-kube-api-access-2w8n5\") pod \"glance-c94f-account-create-update-f8wfc\" (UID: \"6f6324d3-a530-4ce5-b00a-6f77fd585509\") " pod="glance-kuttl-tests/glance-c94f-account-create-update-f8wfc" Jan 31 07:11:30 crc kubenswrapper[4687]: I0131 07:11:30.020698 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6324d3-a530-4ce5-b00a-6f77fd585509-operator-scripts\") pod \"glance-c94f-account-create-update-f8wfc\" (UID: \"6f6324d3-a530-4ce5-b00a-6f77fd585509\") " pod="glance-kuttl-tests/glance-c94f-account-create-update-f8wfc" Jan 31 07:11:30 crc kubenswrapper[4687]: I0131 07:11:30.080007 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-pjkn6" Jan 31 07:11:30 crc kubenswrapper[4687]: I0131 07:11:30.122541 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w8n5\" (UniqueName: \"kubernetes.io/projected/6f6324d3-a530-4ce5-b00a-6f77fd585509-kube-api-access-2w8n5\") pod \"glance-c94f-account-create-update-f8wfc\" (UID: \"6f6324d3-a530-4ce5-b00a-6f77fd585509\") " pod="glance-kuttl-tests/glance-c94f-account-create-update-f8wfc" Jan 31 07:11:30 crc kubenswrapper[4687]: I0131 07:11:30.122620 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6324d3-a530-4ce5-b00a-6f77fd585509-operator-scripts\") pod \"glance-c94f-account-create-update-f8wfc\" (UID: \"6f6324d3-a530-4ce5-b00a-6f77fd585509\") " pod="glance-kuttl-tests/glance-c94f-account-create-update-f8wfc" Jan 31 07:11:30 crc kubenswrapper[4687]: I0131 07:11:30.124478 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6324d3-a530-4ce5-b00a-6f77fd585509-operator-scripts\") pod \"glance-c94f-account-create-update-f8wfc\" (UID: \"6f6324d3-a530-4ce5-b00a-6f77fd585509\") " pod="glance-kuttl-tests/glance-c94f-account-create-update-f8wfc" Jan 31 07:11:30 crc kubenswrapper[4687]: I0131 07:11:30.142024 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w8n5\" (UniqueName: \"kubernetes.io/projected/6f6324d3-a530-4ce5-b00a-6f77fd585509-kube-api-access-2w8n5\") pod \"glance-c94f-account-create-update-f8wfc\" (UID: \"6f6324d3-a530-4ce5-b00a-6f77fd585509\") " pod="glance-kuttl-tests/glance-c94f-account-create-update-f8wfc" Jan 31 07:11:30 crc kubenswrapper[4687]: I0131 07:11:30.193019 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nrz8z"] Jan 31 07:11:30 crc kubenswrapper[4687]: I0131 
07:11:30.263751 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-c94f-account-create-update-f8wfc" Jan 31 07:11:30 crc kubenswrapper[4687]: I0131 07:11:30.520963 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-pjkn6"] Jan 31 07:11:30 crc kubenswrapper[4687]: I0131 07:11:30.547677 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-c94f-account-create-update-f8wfc"] Jan 31 07:11:31 crc kubenswrapper[4687]: I0131 07:11:31.381219 4687 generic.go:334] "Generic (PLEG): container finished" podID="ad5f32b0-dd27-4c63-91a6-a63cb5bf5452" containerID="b423851f07145c278cf65cf0c5aa4a0713dc23590a914473207968afa66ca330" exitCode=0 Jan 31 07:11:31 crc kubenswrapper[4687]: I0131 07:11:31.381286 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-pjkn6" event={"ID":"ad5f32b0-dd27-4c63-91a6-a63cb5bf5452","Type":"ContainerDied","Data":"b423851f07145c278cf65cf0c5aa4a0713dc23590a914473207968afa66ca330"} Jan 31 07:11:31 crc kubenswrapper[4687]: I0131 07:11:31.382589 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-pjkn6" event={"ID":"ad5f32b0-dd27-4c63-91a6-a63cb5bf5452","Type":"ContainerStarted","Data":"b1d6c67c01c33e4ff83dd9ba8810d3998ec13827cd7d3a2d66bcdf71e7f3ade8"} Jan 31 07:11:31 crc kubenswrapper[4687]: I0131 07:11:31.384455 4687 generic.go:334] "Generic (PLEG): container finished" podID="6f6324d3-a530-4ce5-b00a-6f77fd585509" containerID="ad697f91df00467b940dec87a53e70442fd387730467cfd68d64a9fbafcaff87" exitCode=0 Jan 31 07:11:31 crc kubenswrapper[4687]: I0131 07:11:31.384541 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-c94f-account-create-update-f8wfc" event={"ID":"6f6324d3-a530-4ce5-b00a-6f77fd585509","Type":"ContainerDied","Data":"ad697f91df00467b940dec87a53e70442fd387730467cfd68d64a9fbafcaff87"} Jan 31 
07:11:31 crc kubenswrapper[4687]: I0131 07:11:31.384589 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-c94f-account-create-update-f8wfc" event={"ID":"6f6324d3-a530-4ce5-b00a-6f77fd585509","Type":"ContainerStarted","Data":"c6c4cea2fcc7a2261e2abb8bb03685608ccf03aec314026644c7a27eca319b71"} Jan 31 07:11:31 crc kubenswrapper[4687]: I0131 07:11:31.384650 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nrz8z" podUID="3b11de24-19a1-4ee8-a0e1-7688c1f743b7" containerName="registry-server" containerID="cri-o://34e3b1330a785dd066f89f861017cc5d49a614cbfe40c44aa561335f95746a39" gracePeriod=2 Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.029679 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.048773 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-catalog-content\") pod \"3b11de24-19a1-4ee8-a0e1-7688c1f743b7\" (UID: \"3b11de24-19a1-4ee8-a0e1-7688c1f743b7\") " Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.048834 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzcr6\" (UniqueName: \"kubernetes.io/projected/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-kube-api-access-nzcr6\") pod \"3b11de24-19a1-4ee8-a0e1-7688c1f743b7\" (UID: \"3b11de24-19a1-4ee8-a0e1-7688c1f743b7\") " Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.048890 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-utilities\") pod \"3b11de24-19a1-4ee8-a0e1-7688c1f743b7\" (UID: \"3b11de24-19a1-4ee8-a0e1-7688c1f743b7\") " Jan 31 07:11:32 crc 
kubenswrapper[4687]: I0131 07:11:32.050053 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-utilities" (OuterVolumeSpecName: "utilities") pod "3b11de24-19a1-4ee8-a0e1-7688c1f743b7" (UID: "3b11de24-19a1-4ee8-a0e1-7688c1f743b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.057595 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-kube-api-access-nzcr6" (OuterVolumeSpecName: "kube-api-access-nzcr6") pod "3b11de24-19a1-4ee8-a0e1-7688c1f743b7" (UID: "3b11de24-19a1-4ee8-a0e1-7688c1f743b7"). InnerVolumeSpecName "kube-api-access-nzcr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.115675 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b11de24-19a1-4ee8-a0e1-7688c1f743b7" (UID: "3b11de24-19a1-4ee8-a0e1-7688c1f743b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.150620 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzcr6\" (UniqueName: \"kubernetes.io/projected/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-kube-api-access-nzcr6\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.150905 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.150915 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b11de24-19a1-4ee8-a0e1-7688c1f743b7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.402578 4687 generic.go:334] "Generic (PLEG): container finished" podID="3b11de24-19a1-4ee8-a0e1-7688c1f743b7" containerID="34e3b1330a785dd066f89f861017cc5d49a614cbfe40c44aa561335f95746a39" exitCode=0 Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.402652 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrz8z" event={"ID":"3b11de24-19a1-4ee8-a0e1-7688c1f743b7","Type":"ContainerDied","Data":"34e3b1330a785dd066f89f861017cc5d49a614cbfe40c44aa561335f95746a39"} Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.402714 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrz8z" event={"ID":"3b11de24-19a1-4ee8-a0e1-7688c1f743b7","Type":"ContainerDied","Data":"9bd5351b222a86f48b83837aadbb25b2fdbd8e7368c994e9b67ab1b7c3a75593"} Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.402738 4687 scope.go:117] "RemoveContainer" containerID="34e3b1330a785dd066f89f861017cc5d49a614cbfe40c44aa561335f95746a39" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 
07:11:32.402829 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nrz8z" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.422322 4687 scope.go:117] "RemoveContainer" containerID="9d38596a6445809da7827b3ddcf0fd48c0c3e4e2e30d77d93dabab4af7200ac5" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.440039 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nrz8z"] Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.463201 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nrz8z"] Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.474880 4687 scope.go:117] "RemoveContainer" containerID="d8db071a489c567f6f62f7e5890684b32372499685b7de9ed570e55590513ea1" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.522695 4687 scope.go:117] "RemoveContainer" containerID="34e3b1330a785dd066f89f861017cc5d49a614cbfe40c44aa561335f95746a39" Jan 31 07:11:32 crc kubenswrapper[4687]: E0131 07:11:32.523718 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34e3b1330a785dd066f89f861017cc5d49a614cbfe40c44aa561335f95746a39\": container with ID starting with 34e3b1330a785dd066f89f861017cc5d49a614cbfe40c44aa561335f95746a39 not found: ID does not exist" containerID="34e3b1330a785dd066f89f861017cc5d49a614cbfe40c44aa561335f95746a39" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.523765 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34e3b1330a785dd066f89f861017cc5d49a614cbfe40c44aa561335f95746a39"} err="failed to get container status \"34e3b1330a785dd066f89f861017cc5d49a614cbfe40c44aa561335f95746a39\": rpc error: code = NotFound desc = could not find container \"34e3b1330a785dd066f89f861017cc5d49a614cbfe40c44aa561335f95746a39\": container with ID starting with 
34e3b1330a785dd066f89f861017cc5d49a614cbfe40c44aa561335f95746a39 not found: ID does not exist" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.523802 4687 scope.go:117] "RemoveContainer" containerID="9d38596a6445809da7827b3ddcf0fd48c0c3e4e2e30d77d93dabab4af7200ac5" Jan 31 07:11:32 crc kubenswrapper[4687]: E0131 07:11:32.525707 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d38596a6445809da7827b3ddcf0fd48c0c3e4e2e30d77d93dabab4af7200ac5\": container with ID starting with 9d38596a6445809da7827b3ddcf0fd48c0c3e4e2e30d77d93dabab4af7200ac5 not found: ID does not exist" containerID="9d38596a6445809da7827b3ddcf0fd48c0c3e4e2e30d77d93dabab4af7200ac5" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.525745 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d38596a6445809da7827b3ddcf0fd48c0c3e4e2e30d77d93dabab4af7200ac5"} err="failed to get container status \"9d38596a6445809da7827b3ddcf0fd48c0c3e4e2e30d77d93dabab4af7200ac5\": rpc error: code = NotFound desc = could not find container \"9d38596a6445809da7827b3ddcf0fd48c0c3e4e2e30d77d93dabab4af7200ac5\": container with ID starting with 9d38596a6445809da7827b3ddcf0fd48c0c3e4e2e30d77d93dabab4af7200ac5 not found: ID does not exist" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.525773 4687 scope.go:117] "RemoveContainer" containerID="d8db071a489c567f6f62f7e5890684b32372499685b7de9ed570e55590513ea1" Jan 31 07:11:32 crc kubenswrapper[4687]: E0131 07:11:32.526653 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8db071a489c567f6f62f7e5890684b32372499685b7de9ed570e55590513ea1\": container with ID starting with d8db071a489c567f6f62f7e5890684b32372499685b7de9ed570e55590513ea1 not found: ID does not exist" containerID="d8db071a489c567f6f62f7e5890684b32372499685b7de9ed570e55590513ea1" Jan 31 07:11:32 crc 
kubenswrapper[4687]: I0131 07:11:32.526685 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8db071a489c567f6f62f7e5890684b32372499685b7de9ed570e55590513ea1"} err="failed to get container status \"d8db071a489c567f6f62f7e5890684b32372499685b7de9ed570e55590513ea1\": rpc error: code = NotFound desc = could not find container \"d8db071a489c567f6f62f7e5890684b32372499685b7de9ed570e55590513ea1\": container with ID starting with d8db071a489c567f6f62f7e5890684b32372499685b7de9ed570e55590513ea1 not found: ID does not exist" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.804238 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-c94f-account-create-update-f8wfc" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.809665 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-pjkn6" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.863086 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr256\" (UniqueName: \"kubernetes.io/projected/ad5f32b0-dd27-4c63-91a6-a63cb5bf5452-kube-api-access-gr256\") pod \"ad5f32b0-dd27-4c63-91a6-a63cb5bf5452\" (UID: \"ad5f32b0-dd27-4c63-91a6-a63cb5bf5452\") " Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.863169 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad5f32b0-dd27-4c63-91a6-a63cb5bf5452-operator-scripts\") pod \"ad5f32b0-dd27-4c63-91a6-a63cb5bf5452\" (UID: \"ad5f32b0-dd27-4c63-91a6-a63cb5bf5452\") " Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.863241 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6324d3-a530-4ce5-b00a-6f77fd585509-operator-scripts\") pod 
\"6f6324d3-a530-4ce5-b00a-6f77fd585509\" (UID: \"6f6324d3-a530-4ce5-b00a-6f77fd585509\") " Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.863265 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w8n5\" (UniqueName: \"kubernetes.io/projected/6f6324d3-a530-4ce5-b00a-6f77fd585509-kube-api-access-2w8n5\") pod \"6f6324d3-a530-4ce5-b00a-6f77fd585509\" (UID: \"6f6324d3-a530-4ce5-b00a-6f77fd585509\") " Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.864162 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f6324d3-a530-4ce5-b00a-6f77fd585509-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f6324d3-a530-4ce5-b00a-6f77fd585509" (UID: "6f6324d3-a530-4ce5-b00a-6f77fd585509"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.864202 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad5f32b0-dd27-4c63-91a6-a63cb5bf5452-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad5f32b0-dd27-4c63-91a6-a63cb5bf5452" (UID: "ad5f32b0-dd27-4c63-91a6-a63cb5bf5452"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.867634 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad5f32b0-dd27-4c63-91a6-a63cb5bf5452-kube-api-access-gr256" (OuterVolumeSpecName: "kube-api-access-gr256") pod "ad5f32b0-dd27-4c63-91a6-a63cb5bf5452" (UID: "ad5f32b0-dd27-4c63-91a6-a63cb5bf5452"). InnerVolumeSpecName "kube-api-access-gr256". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.870490 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6324d3-a530-4ce5-b00a-6f77fd585509-kube-api-access-2w8n5" (OuterVolumeSpecName: "kube-api-access-2w8n5") pod "6f6324d3-a530-4ce5-b00a-6f77fd585509" (UID: "6f6324d3-a530-4ce5-b00a-6f77fd585509"). InnerVolumeSpecName "kube-api-access-2w8n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.965219 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6324d3-a530-4ce5-b00a-6f77fd585509-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.965253 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w8n5\" (UniqueName: \"kubernetes.io/projected/6f6324d3-a530-4ce5-b00a-6f77fd585509-kube-api-access-2w8n5\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.965266 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr256\" (UniqueName: \"kubernetes.io/projected/ad5f32b0-dd27-4c63-91a6-a63cb5bf5452-kube-api-access-gr256\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:32 crc kubenswrapper[4687]: I0131 07:11:32.965296 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad5f32b0-dd27-4c63-91a6-a63cb5bf5452-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:33 crc kubenswrapper[4687]: I0131 07:11:33.412332 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-pjkn6" event={"ID":"ad5f32b0-dd27-4c63-91a6-a63cb5bf5452","Type":"ContainerDied","Data":"b1d6c67c01c33e4ff83dd9ba8810d3998ec13827cd7d3a2d66bcdf71e7f3ade8"} Jan 31 07:11:33 crc kubenswrapper[4687]: I0131 
07:11:33.412382 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1d6c67c01c33e4ff83dd9ba8810d3998ec13827cd7d3a2d66bcdf71e7f3ade8" Jan 31 07:11:33 crc kubenswrapper[4687]: I0131 07:11:33.412671 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-pjkn6" Jan 31 07:11:33 crc kubenswrapper[4687]: I0131 07:11:33.415791 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-c94f-account-create-update-f8wfc" event={"ID":"6f6324d3-a530-4ce5-b00a-6f77fd585509","Type":"ContainerDied","Data":"c6c4cea2fcc7a2261e2abb8bb03685608ccf03aec314026644c7a27eca319b71"} Jan 31 07:11:33 crc kubenswrapper[4687]: I0131 07:11:33.415824 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6c4cea2fcc7a2261e2abb8bb03685608ccf03aec314026644c7a27eca319b71" Jan 31 07:11:33 crc kubenswrapper[4687]: I0131 07:11:33.415805 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-c94f-account-create-update-f8wfc" Jan 31 07:11:33 crc kubenswrapper[4687]: I0131 07:11:33.613292 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b11de24-19a1-4ee8-a0e1-7688c1f743b7" path="/var/lib/kubelet/pods/3b11de24-19a1-4ee8-a0e1-7688c1f743b7/volumes" Jan 31 07:11:34 crc kubenswrapper[4687]: I0131 07:11:34.274618 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:34 crc kubenswrapper[4687]: I0131 07:11:34.274977 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:34 crc kubenswrapper[4687]: I0131 07:11:34.314545 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:34 crc kubenswrapper[4687]: I0131 07:11:34.478309 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.026082 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-7xmk6"] Jan 31 07:11:35 crc kubenswrapper[4687]: E0131 07:11:35.026370 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6324d3-a530-4ce5-b00a-6f77fd585509" containerName="mariadb-account-create-update" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.026389 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6324d3-a530-4ce5-b00a-6f77fd585509" containerName="mariadb-account-create-update" Jan 31 07:11:35 crc kubenswrapper[4687]: E0131 07:11:35.026401 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b11de24-19a1-4ee8-a0e1-7688c1f743b7" containerName="registry-server" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.026431 4687 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="3b11de24-19a1-4ee8-a0e1-7688c1f743b7" containerName="registry-server" Jan 31 07:11:35 crc kubenswrapper[4687]: E0131 07:11:35.026445 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b11de24-19a1-4ee8-a0e1-7688c1f743b7" containerName="extract-utilities" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.026451 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b11de24-19a1-4ee8-a0e1-7688c1f743b7" containerName="extract-utilities" Jan 31 07:11:35 crc kubenswrapper[4687]: E0131 07:11:35.026465 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b11de24-19a1-4ee8-a0e1-7688c1f743b7" containerName="extract-content" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.026472 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b11de24-19a1-4ee8-a0e1-7688c1f743b7" containerName="extract-content" Jan 31 07:11:35 crc kubenswrapper[4687]: E0131 07:11:35.026483 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad5f32b0-dd27-4c63-91a6-a63cb5bf5452" containerName="mariadb-database-create" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.026489 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5f32b0-dd27-4c63-91a6-a63cb5bf5452" containerName="mariadb-database-create" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.026615 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6324d3-a530-4ce5-b00a-6f77fd585509" containerName="mariadb-account-create-update" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.026631 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad5f32b0-dd27-4c63-91a6-a63cb5bf5452" containerName="mariadb-database-create" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.026642 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b11de24-19a1-4ee8-a0e1-7688c1f743b7" containerName="registry-server" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.027080 4687 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-7xmk6" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.030011 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.030203 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-cxptj" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.041136 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-7xmk6"] Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.094357 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-config-data\") pod \"glance-db-sync-7xmk6\" (UID: \"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d\") " pod="glance-kuttl-tests/glance-db-sync-7xmk6" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.094547 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-db-sync-config-data\") pod \"glance-db-sync-7xmk6\" (UID: \"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d\") " pod="glance-kuttl-tests/glance-db-sync-7xmk6" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.094615 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szlg8\" (UniqueName: \"kubernetes.io/projected/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-kube-api-access-szlg8\") pod \"glance-db-sync-7xmk6\" (UID: \"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d\") " pod="glance-kuttl-tests/glance-db-sync-7xmk6" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.196457 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-config-data\") pod \"glance-db-sync-7xmk6\" (UID: \"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d\") " pod="glance-kuttl-tests/glance-db-sync-7xmk6" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.196551 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-db-sync-config-data\") pod \"glance-db-sync-7xmk6\" (UID: \"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d\") " pod="glance-kuttl-tests/glance-db-sync-7xmk6" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.196599 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szlg8\" (UniqueName: \"kubernetes.io/projected/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-kube-api-access-szlg8\") pod \"glance-db-sync-7xmk6\" (UID: \"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d\") " pod="glance-kuttl-tests/glance-db-sync-7xmk6" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.201433 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-db-sync-config-data\") pod \"glance-db-sync-7xmk6\" (UID: \"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d\") " pod="glance-kuttl-tests/glance-db-sync-7xmk6" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.201478 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-config-data\") pod \"glance-db-sync-7xmk6\" (UID: \"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d\") " pod="glance-kuttl-tests/glance-db-sync-7xmk6" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.216857 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szlg8\" (UniqueName: 
\"kubernetes.io/projected/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-kube-api-access-szlg8\") pod \"glance-db-sync-7xmk6\" (UID: \"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d\") " pod="glance-kuttl-tests/glance-db-sync-7xmk6" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.342703 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-7xmk6" Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.386321 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ch7fd"] Jan 31 07:11:35 crc kubenswrapper[4687]: I0131 07:11:35.575012 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-7xmk6"] Jan 31 07:11:36 crc kubenswrapper[4687]: I0131 07:11:36.454168 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-7xmk6" event={"ID":"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d","Type":"ContainerStarted","Data":"40ca971aa41d8a2a56c8b413d0ce335f0ab64b257eb404beb72f9e78baba2807"} Jan 31 07:11:36 crc kubenswrapper[4687]: I0131 07:11:36.454517 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-7xmk6" event={"ID":"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d","Type":"ContainerStarted","Data":"3e2e56cdfb22186f60a0ebfeed3e08ac27e71032724bdba4a62e74656b0f78d8"} Jan 31 07:11:36 crc kubenswrapper[4687]: I0131 07:11:36.454466 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ch7fd" podUID="f6a903e2-5caa-405b-a762-5f619e51cd76" containerName="registry-server" containerID="cri-o://e1608db3836500d4148927f291047e06ef8c0907bb8ba9997f65cded4b72965b" gracePeriod=2 Jan 31 07:11:36 crc kubenswrapper[4687]: I0131 07:11:36.480035 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-7xmk6" podStartSLOduration=1.480014371 podStartE2EDuration="1.480014371s" 
podCreationTimestamp="2026-01-31 07:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:11:36.473244726 +0000 UTC m=+1722.750504311" watchObservedRunningTime="2026-01-31 07:11:36.480014371 +0000 UTC m=+1722.757273946" Jan 31 07:11:36 crc kubenswrapper[4687]: I0131 07:11:36.943997 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.027488 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a903e2-5caa-405b-a762-5f619e51cd76-catalog-content\") pod \"f6a903e2-5caa-405b-a762-5f619e51cd76\" (UID: \"f6a903e2-5caa-405b-a762-5f619e51cd76\") " Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.027599 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a903e2-5caa-405b-a762-5f619e51cd76-utilities\") pod \"f6a903e2-5caa-405b-a762-5f619e51cd76\" (UID: \"f6a903e2-5caa-405b-a762-5f619e51cd76\") " Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.027669 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfbn9\" (UniqueName: \"kubernetes.io/projected/f6a903e2-5caa-405b-a762-5f619e51cd76-kube-api-access-mfbn9\") pod \"f6a903e2-5caa-405b-a762-5f619e51cd76\" (UID: \"f6a903e2-5caa-405b-a762-5f619e51cd76\") " Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.030072 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6a903e2-5caa-405b-a762-5f619e51cd76-utilities" (OuterVolumeSpecName: "utilities") pod "f6a903e2-5caa-405b-a762-5f619e51cd76" (UID: "f6a903e2-5caa-405b-a762-5f619e51cd76"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.046357 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6a903e2-5caa-405b-a762-5f619e51cd76-kube-api-access-mfbn9" (OuterVolumeSpecName: "kube-api-access-mfbn9") pod "f6a903e2-5caa-405b-a762-5f619e51cd76" (UID: "f6a903e2-5caa-405b-a762-5f619e51cd76"). InnerVolumeSpecName "kube-api-access-mfbn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.078087 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6a903e2-5caa-405b-a762-5f619e51cd76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6a903e2-5caa-405b-a762-5f619e51cd76" (UID: "f6a903e2-5caa-405b-a762-5f619e51cd76"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.129561 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a903e2-5caa-405b-a762-5f619e51cd76-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.129606 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a903e2-5caa-405b-a762-5f619e51cd76-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.129618 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfbn9\" (UniqueName: \"kubernetes.io/projected/f6a903e2-5caa-405b-a762-5f619e51cd76-kube-api-access-mfbn9\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.462243 4687 generic.go:334] "Generic (PLEG): container finished" podID="f6a903e2-5caa-405b-a762-5f619e51cd76" 
containerID="e1608db3836500d4148927f291047e06ef8c0907bb8ba9997f65cded4b72965b" exitCode=0 Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.462465 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch7fd" event={"ID":"f6a903e2-5caa-405b-a762-5f619e51cd76","Type":"ContainerDied","Data":"e1608db3836500d4148927f291047e06ef8c0907bb8ba9997f65cded4b72965b"} Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.462702 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch7fd" event={"ID":"f6a903e2-5caa-405b-a762-5f619e51cd76","Type":"ContainerDied","Data":"29e2ae6f32dd36067c4627aef2c4f1fbaab1e696d27c4a5af7c709989eac4332"} Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.462527 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ch7fd" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.462722 4687 scope.go:117] "RemoveContainer" containerID="e1608db3836500d4148927f291047e06ef8c0907bb8ba9997f65cded4b72965b" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.483567 4687 scope.go:117] "RemoveContainer" containerID="89436ca9077ef42e9a18ce2b3375c20c38592f1602abc33059e35a28d0105139" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.506016 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ch7fd"] Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.512550 4687 scope.go:117] "RemoveContainer" containerID="d3e28a87aa12212c4c83a07801b6ae9d58b6f808e564d838a519304aaef2f77e" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.513504 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ch7fd"] Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.532015 4687 scope.go:117] "RemoveContainer" containerID="e1608db3836500d4148927f291047e06ef8c0907bb8ba9997f65cded4b72965b" Jan 31 
07:11:37 crc kubenswrapper[4687]: E0131 07:11:37.532578 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1608db3836500d4148927f291047e06ef8c0907bb8ba9997f65cded4b72965b\": container with ID starting with e1608db3836500d4148927f291047e06ef8c0907bb8ba9997f65cded4b72965b not found: ID does not exist" containerID="e1608db3836500d4148927f291047e06ef8c0907bb8ba9997f65cded4b72965b" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.532624 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1608db3836500d4148927f291047e06ef8c0907bb8ba9997f65cded4b72965b"} err="failed to get container status \"e1608db3836500d4148927f291047e06ef8c0907bb8ba9997f65cded4b72965b\": rpc error: code = NotFound desc = could not find container \"e1608db3836500d4148927f291047e06ef8c0907bb8ba9997f65cded4b72965b\": container with ID starting with e1608db3836500d4148927f291047e06ef8c0907bb8ba9997f65cded4b72965b not found: ID does not exist" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.532658 4687 scope.go:117] "RemoveContainer" containerID="89436ca9077ef42e9a18ce2b3375c20c38592f1602abc33059e35a28d0105139" Jan 31 07:11:37 crc kubenswrapper[4687]: E0131 07:11:37.533230 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89436ca9077ef42e9a18ce2b3375c20c38592f1602abc33059e35a28d0105139\": container with ID starting with 89436ca9077ef42e9a18ce2b3375c20c38592f1602abc33059e35a28d0105139 not found: ID does not exist" containerID="89436ca9077ef42e9a18ce2b3375c20c38592f1602abc33059e35a28d0105139" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.533264 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89436ca9077ef42e9a18ce2b3375c20c38592f1602abc33059e35a28d0105139"} err="failed to get container status 
\"89436ca9077ef42e9a18ce2b3375c20c38592f1602abc33059e35a28d0105139\": rpc error: code = NotFound desc = could not find container \"89436ca9077ef42e9a18ce2b3375c20c38592f1602abc33059e35a28d0105139\": container with ID starting with 89436ca9077ef42e9a18ce2b3375c20c38592f1602abc33059e35a28d0105139 not found: ID does not exist" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.533290 4687 scope.go:117] "RemoveContainer" containerID="d3e28a87aa12212c4c83a07801b6ae9d58b6f808e564d838a519304aaef2f77e" Jan 31 07:11:37 crc kubenswrapper[4687]: E0131 07:11:37.533700 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3e28a87aa12212c4c83a07801b6ae9d58b6f808e564d838a519304aaef2f77e\": container with ID starting with d3e28a87aa12212c4c83a07801b6ae9d58b6f808e564d838a519304aaef2f77e not found: ID does not exist" containerID="d3e28a87aa12212c4c83a07801b6ae9d58b6f808e564d838a519304aaef2f77e" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.533734 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3e28a87aa12212c4c83a07801b6ae9d58b6f808e564d838a519304aaef2f77e"} err="failed to get container status \"d3e28a87aa12212c4c83a07801b6ae9d58b6f808e564d838a519304aaef2f77e\": rpc error: code = NotFound desc = could not find container \"d3e28a87aa12212c4c83a07801b6ae9d58b6f808e564d838a519304aaef2f77e\": container with ID starting with d3e28a87aa12212c4c83a07801b6ae9d58b6f808e564d838a519304aaef2f77e not found: ID does not exist" Jan 31 07:11:37 crc kubenswrapper[4687]: I0131 07:11:37.612626 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6a903e2-5caa-405b-a762-5f619e51cd76" path="/var/lib/kubelet/pods/f6a903e2-5caa-405b-a762-5f619e51cd76/volumes" Jan 31 07:11:40 crc kubenswrapper[4687]: I0131 07:11:40.495284 4687 generic.go:334] "Generic (PLEG): container finished" podID="19c6086f-95b9-43e6-94bc-b8bb8a35fa6d" 
containerID="40ca971aa41d8a2a56c8b413d0ce335f0ab64b257eb404beb72f9e78baba2807" exitCode=0 Jan 31 07:11:40 crc kubenswrapper[4687]: I0131 07:11:40.495362 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-7xmk6" event={"ID":"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d","Type":"ContainerDied","Data":"40ca971aa41d8a2a56c8b413d0ce335f0ab64b257eb404beb72f9e78baba2807"} Jan 31 07:11:41 crc kubenswrapper[4687]: I0131 07:11:41.837175 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-7xmk6" Jan 31 07:11:41 crc kubenswrapper[4687]: I0131 07:11:41.860512 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-config-data\") pod \"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d\" (UID: \"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d\") " Jan 31 07:11:41 crc kubenswrapper[4687]: I0131 07:11:41.860634 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-db-sync-config-data\") pod \"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d\" (UID: \"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d\") " Jan 31 07:11:41 crc kubenswrapper[4687]: I0131 07:11:41.860661 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szlg8\" (UniqueName: \"kubernetes.io/projected/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-kube-api-access-szlg8\") pod \"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d\" (UID: \"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d\") " Jan 31 07:11:41 crc kubenswrapper[4687]: I0131 07:11:41.867083 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-kube-api-access-szlg8" (OuterVolumeSpecName: "kube-api-access-szlg8") pod "19c6086f-95b9-43e6-94bc-b8bb8a35fa6d" (UID: 
"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d"). InnerVolumeSpecName "kube-api-access-szlg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:11:41 crc kubenswrapper[4687]: I0131 07:11:41.873158 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "19c6086f-95b9-43e6-94bc-b8bb8a35fa6d" (UID: "19c6086f-95b9-43e6-94bc-b8bb8a35fa6d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:11:41 crc kubenswrapper[4687]: I0131 07:11:41.901829 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-config-data" (OuterVolumeSpecName: "config-data") pod "19c6086f-95b9-43e6-94bc-b8bb8a35fa6d" (UID: "19c6086f-95b9-43e6-94bc-b8bb8a35fa6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:11:41 crc kubenswrapper[4687]: I0131 07:11:41.961992 4687 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:41 crc kubenswrapper[4687]: I0131 07:11:41.962030 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szlg8\" (UniqueName: \"kubernetes.io/projected/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-kube-api-access-szlg8\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:41 crc kubenswrapper[4687]: I0131 07:11:41.962041 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:42 crc kubenswrapper[4687]: I0131 07:11:42.514307 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-7xmk6" 
event={"ID":"19c6086f-95b9-43e6-94bc-b8bb8a35fa6d","Type":"ContainerDied","Data":"3e2e56cdfb22186f60a0ebfeed3e08ac27e71032724bdba4a62e74656b0f78d8"} Jan 31 07:11:42 crc kubenswrapper[4687]: I0131 07:11:42.514349 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e2e56cdfb22186f60a0ebfeed3e08ac27e71032724bdba4a62e74656b0f78d8" Jan 31 07:11:42 crc kubenswrapper[4687]: I0131 07:11:42.514427 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-7xmk6" Jan 31 07:11:42 crc kubenswrapper[4687]: I0131 07:11:42.604074 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:11:42 crc kubenswrapper[4687]: E0131 07:11:42.604367 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.799827 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:11:43 crc kubenswrapper[4687]: E0131 07:11:43.800943 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a903e2-5caa-405b-a762-5f619e51cd76" containerName="extract-utilities" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.801031 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a903e2-5caa-405b-a762-5f619e51cd76" containerName="extract-utilities" Jan 31 07:11:43 crc kubenswrapper[4687]: E0131 07:11:43.801089 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a903e2-5caa-405b-a762-5f619e51cd76" 
containerName="registry-server" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.801141 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a903e2-5caa-405b-a762-5f619e51cd76" containerName="registry-server" Jan 31 07:11:43 crc kubenswrapper[4687]: E0131 07:11:43.801200 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c6086f-95b9-43e6-94bc-b8bb8a35fa6d" containerName="glance-db-sync" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.801248 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c6086f-95b9-43e6-94bc-b8bb8a35fa6d" containerName="glance-db-sync" Jan 31 07:11:43 crc kubenswrapper[4687]: E0131 07:11:43.801323 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a903e2-5caa-405b-a762-5f619e51cd76" containerName="extract-content" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.801379 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a903e2-5caa-405b-a762-5f619e51cd76" containerName="extract-content" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.801567 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6a903e2-5caa-405b-a762-5f619e51cd76" containerName="registry-server" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.801643 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="19c6086f-95b9-43e6-94bc-b8bb8a35fa6d" containerName="glance-db-sync" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.802470 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.806083 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-external-config-data" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.806533 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-cxptj" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.807139 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.812161 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.887662 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.890777 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.892924 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.919828 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.988994 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.989040 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.989064 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd530881-31d1-4d14-a877-2826adf94b2c-config-data\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.989088 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-external-api-0\" (UID: 
\"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.989151 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-run\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.989177 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.989200 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.989223 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd530881-31d1-4d14-a877-2826adf94b2c-scripts\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.989242 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-sys\") pod 
\"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.989260 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd530881-31d1-4d14-a877-2826adf94b2c-logs\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.989278 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbh96\" (UniqueName: \"kubernetes.io/projected/dd530881-31d1-4d14-a877-2826adf94b2c-kube-api-access-cbh96\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.989297 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-dev\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.989315 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:43 crc kubenswrapper[4687]: I0131 07:11:43.989337 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/dd530881-31d1-4d14-a877-2826adf94b2c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091176 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-run\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091259 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1ed49b-061b-4d49-8755-edc6a84fd049-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091287 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwcbt\" (UniqueName: \"kubernetes.io/projected/ba1ed49b-061b-4d49-8755-edc6a84fd049-kube-api-access-kwcbt\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091313 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091316 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" 
(UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-run\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091343 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091475 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091475 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091529 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091552 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" 
(UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091592 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-run\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091616 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091633 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd530881-31d1-4d14-a877-2826adf94b2c-scripts\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091705 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-sys\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091760 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/dd530881-31d1-4d14-a877-2826adf94b2c-logs\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091793 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-sys\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091814 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbh96\" (UniqueName: \"kubernetes.io/projected/dd530881-31d1-4d14-a877-2826adf94b2c-kube-api-access-cbh96\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091847 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.091842 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-dev\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.092282 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-dev\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.092332 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.092398 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd530881-31d1-4d14-a877-2826adf94b2c-logs\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.092398 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.092469 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba1ed49b-061b-4d49-8755-edc6a84fd049-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.092501 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-sys\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.092542 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd530881-31d1-4d14-a877-2826adf94b2c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.092594 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba1ed49b-061b-4d49-8755-edc6a84fd049-logs\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.092622 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.092652 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.092689 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage16-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.092714 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-dev\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.092742 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd530881-31d1-4d14-a877-2826adf94b2c-config-data\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.092772 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.092804 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.092820 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ba1ed49b-061b-4d49-8755-edc6a84fd049-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.093188 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.093241 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.093238 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") device mount path \"/mnt/openstack/pv20\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.093351 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd530881-31d1-4d14-a877-2826adf94b2c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.096223 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/dd530881-31d1-4d14-a877-2826adf94b2c-scripts\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.101676 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd530881-31d1-4d14-a877-2826adf94b2c-config-data\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.115903 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.118679 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.118722 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbh96\" (UniqueName: \"kubernetes.io/projected/dd530881-31d1-4d14-a877-2826adf94b2c-kube-api-access-cbh96\") pod \"glance-default-external-api-0\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.194570 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.194634 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba1ed49b-061b-4d49-8755-edc6a84fd049-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.194664 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1ed49b-061b-4d49-8755-edc6a84fd049-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.194682 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwcbt\" (UniqueName: \"kubernetes.io/projected/ba1ed49b-061b-4d49-8755-edc6a84fd049-kube-api-access-kwcbt\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.194682 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") device mount path \"/mnt/openstack/pv18\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.194725 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-etc-iscsi\") pod 
\"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.194698 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.194970 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.195009 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.195045 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-run\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.195091 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-lib-modules\") pod \"glance-default-internal-api-0\" (UID: 
\"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.195208 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba1ed49b-061b-4d49-8755-edc6a84fd049-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.195244 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-sys\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.195317 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba1ed49b-061b-4d49-8755-edc6a84fd049-logs\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.195380 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.195403 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-dev\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.195558 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-dev\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.195627 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.195670 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.195697 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-run\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.195722 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: 
I0131 07:11:44.195985 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-sys\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.196135 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") device mount path \"/mnt/openstack/pv16\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.196238 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba1ed49b-061b-4d49-8755-edc6a84fd049-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.196344 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba1ed49b-061b-4d49-8755-edc6a84fd049-logs\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.205111 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba1ed49b-061b-4d49-8755-edc6a84fd049-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.206557 4687 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1ed49b-061b-4d49-8755-edc6a84fd049-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.214292 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwcbt\" (UniqueName: \"kubernetes.io/projected/ba1ed49b-061b-4d49-8755-edc6a84fd049-kube-api-access-kwcbt\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.216136 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.230768 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-0\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.420086 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.522964 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.644753 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:11:44 crc kubenswrapper[4687]: I0131 07:11:44.759341 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:11:45 crc kubenswrapper[4687]: I0131 07:11:45.016471 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:11:45 crc kubenswrapper[4687]: W0131 07:11:45.037063 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba1ed49b_061b_4d49_8755_edc6a84fd049.slice/crio-f7afe50aff5ca47ff9b9dc52a57180193134edd17e6c4c58483b66f8ef8a1ebc WatchSource:0}: Error finding container f7afe50aff5ca47ff9b9dc52a57180193134edd17e6c4c58483b66f8ef8a1ebc: Status 404 returned error can't find the container with id f7afe50aff5ca47ff9b9dc52a57180193134edd17e6c4c58483b66f8ef8a1ebc Jan 31 07:11:45 crc kubenswrapper[4687]: I0131 07:11:45.539947 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ba1ed49b-061b-4d49-8755-edc6a84fd049","Type":"ContainerStarted","Data":"78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b"} Jan 31 07:11:45 crc kubenswrapper[4687]: I0131 07:11:45.540024 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="ba1ed49b-061b-4d49-8755-edc6a84fd049" containerName="glance-log" containerID="cri-o://e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d" gracePeriod=30 Jan 31 07:11:45 crc kubenswrapper[4687]: I0131 07:11:45.540673 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ba1ed49b-061b-4d49-8755-edc6a84fd049","Type":"ContainerStarted","Data":"e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d"} Jan 31 07:11:45 crc kubenswrapper[4687]: I0131 07:11:45.540696 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ba1ed49b-061b-4d49-8755-edc6a84fd049","Type":"ContainerStarted","Data":"f7afe50aff5ca47ff9b9dc52a57180193134edd17e6c4c58483b66f8ef8a1ebc"} Jan 31 07:11:45 crc kubenswrapper[4687]: I0131 07:11:45.540091 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="ba1ed49b-061b-4d49-8755-edc6a84fd049" containerName="glance-httpd" containerID="cri-o://78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b" gracePeriod=30 Jan 31 07:11:45 crc kubenswrapper[4687]: I0131 07:11:45.541759 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"dd530881-31d1-4d14-a877-2826adf94b2c","Type":"ContainerStarted","Data":"4734c47c1c9ec9f58f0b9b5f83c084ac5a7ffc4b567f64e61b50adb66c45e683"} Jan 31 07:11:45 crc kubenswrapper[4687]: I0131 07:11:45.543362 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"dd530881-31d1-4d14-a877-2826adf94b2c","Type":"ContainerStarted","Data":"36bbcff6823aeac4b38d47464c175ec79581fd069e97ee7a3a36f1831f01ebdd"} Jan 31 07:11:45 crc kubenswrapper[4687]: I0131 07:11:45.543390 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"dd530881-31d1-4d14-a877-2826adf94b2c","Type":"ContainerStarted","Data":"c18370cf3e22253905b8b08ac5985c2dd80647305231a9855f29ffb3299bb81b"} Jan 31 07:11:45 crc kubenswrapper[4687]: I0131 07:11:45.574792 4687 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=3.574771445 podStartE2EDuration="3.574771445s" podCreationTimestamp="2026-01-31 07:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:11:45.565966694 +0000 UTC m=+1731.843226279" watchObservedRunningTime="2026-01-31 07:11:45.574771445 +0000 UTC m=+1731.852031020" Jan 31 07:11:45 crc kubenswrapper[4687]: I0131 07:11:45.617229 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=2.617212615 podStartE2EDuration="2.617212615s" podCreationTimestamp="2026-01-31 07:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:11:45.613589636 +0000 UTC m=+1731.890849211" watchObservedRunningTime="2026-01-31 07:11:45.617212615 +0000 UTC m=+1731.894472180" Jan 31 07:11:45 crc kubenswrapper[4687]: I0131 07:11:45.903194 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.024895 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwcbt\" (UniqueName: \"kubernetes.io/projected/ba1ed49b-061b-4d49-8755-edc6a84fd049-kube-api-access-kwcbt\") pod \"ba1ed49b-061b-4d49-8755-edc6a84fd049\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.024935 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-run\") pod \"ba1ed49b-061b-4d49-8755-edc6a84fd049\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025040 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-run" (OuterVolumeSpecName: "run") pod "ba1ed49b-061b-4d49-8755-edc6a84fd049" (UID: "ba1ed49b-061b-4d49-8755-edc6a84fd049"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025077 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-var-locks-brick\") pod \"ba1ed49b-061b-4d49-8755-edc6a84fd049\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025135 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-sys\") pod \"ba1ed49b-061b-4d49-8755-edc6a84fd049\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025146 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "ba1ed49b-061b-4d49-8755-edc6a84fd049" (UID: "ba1ed49b-061b-4d49-8755-edc6a84fd049"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025181 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"ba1ed49b-061b-4d49-8755-edc6a84fd049\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025204 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba1ed49b-061b-4d49-8755-edc6a84fd049-logs\") pod \"ba1ed49b-061b-4d49-8755-edc6a84fd049\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025241 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba1ed49b-061b-4d49-8755-edc6a84fd049-httpd-run\") pod \"ba1ed49b-061b-4d49-8755-edc6a84fd049\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025261 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1ed49b-061b-4d49-8755-edc6a84fd049-config-data\") pod \"ba1ed49b-061b-4d49-8755-edc6a84fd049\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025277 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-sys" (OuterVolumeSpecName: "sys") pod "ba1ed49b-061b-4d49-8755-edc6a84fd049" (UID: "ba1ed49b-061b-4d49-8755-edc6a84fd049"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025318 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-lib-modules\") pod \"ba1ed49b-061b-4d49-8755-edc6a84fd049\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025361 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-dev\") pod \"ba1ed49b-061b-4d49-8755-edc6a84fd049\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025380 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba1ed49b-061b-4d49-8755-edc6a84fd049-scripts\") pod \"ba1ed49b-061b-4d49-8755-edc6a84fd049\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025396 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-etc-iscsi\") pod \"ba1ed49b-061b-4d49-8755-edc6a84fd049\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025430 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"ba1ed49b-061b-4d49-8755-edc6a84fd049\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025474 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-etc-nvme\") pod 
\"ba1ed49b-061b-4d49-8755-edc6a84fd049\" (UID: \"ba1ed49b-061b-4d49-8755-edc6a84fd049\") " Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025626 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba1ed49b-061b-4d49-8755-edc6a84fd049-logs" (OuterVolumeSpecName: "logs") pod "ba1ed49b-061b-4d49-8755-edc6a84fd049" (UID: "ba1ed49b-061b-4d49-8755-edc6a84fd049"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025672 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba1ed49b-061b-4d49-8755-edc6a84fd049-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ba1ed49b-061b-4d49-8755-edc6a84fd049" (UID: "ba1ed49b-061b-4d49-8755-edc6a84fd049"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025822 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "ba1ed49b-061b-4d49-8755-edc6a84fd049" (UID: "ba1ed49b-061b-4d49-8755-edc6a84fd049"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025848 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "ba1ed49b-061b-4d49-8755-edc6a84fd049" (UID: "ba1ed49b-061b-4d49-8755-edc6a84fd049"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025872 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ba1ed49b-061b-4d49-8755-edc6a84fd049" (UID: "ba1ed49b-061b-4d49-8755-edc6a84fd049"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.025898 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-dev" (OuterVolumeSpecName: "dev") pod "ba1ed49b-061b-4d49-8755-edc6a84fd049" (UID: "ba1ed49b-061b-4d49-8755-edc6a84fd049"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.026029 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.026052 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.026062 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.026072 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.026083 4687 
reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.026094 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.026104 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ba1ed49b-061b-4d49-8755-edc6a84fd049-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.026113 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba1ed49b-061b-4d49-8755-edc6a84fd049-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.026124 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba1ed49b-061b-4d49-8755-edc6a84fd049-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.030532 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba1ed49b-061b-4d49-8755-edc6a84fd049-scripts" (OuterVolumeSpecName: "scripts") pod "ba1ed49b-061b-4d49-8755-edc6a84fd049" (UID: "ba1ed49b-061b-4d49-8755-edc6a84fd049"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.030767 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage18-crc" (OuterVolumeSpecName: "glance") pod "ba1ed49b-061b-4d49-8755-edc6a84fd049" (UID: "ba1ed49b-061b-4d49-8755-edc6a84fd049"). InnerVolumeSpecName "local-storage18-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.031521 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1ed49b-061b-4d49-8755-edc6a84fd049-kube-api-access-kwcbt" (OuterVolumeSpecName: "kube-api-access-kwcbt") pod "ba1ed49b-061b-4d49-8755-edc6a84fd049" (UID: "ba1ed49b-061b-4d49-8755-edc6a84fd049"). InnerVolumeSpecName "kube-api-access-kwcbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.032388 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage16-crc" (OuterVolumeSpecName: "glance-cache") pod "ba1ed49b-061b-4d49-8755-edc6a84fd049" (UID: "ba1ed49b-061b-4d49-8755-edc6a84fd049"). InnerVolumeSpecName "local-storage16-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.067556 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba1ed49b-061b-4d49-8755-edc6a84fd049-config-data" (OuterVolumeSpecName: "config-data") pod "ba1ed49b-061b-4d49-8755-edc6a84fd049" (UID: "ba1ed49b-061b-4d49-8755-edc6a84fd049"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.127913 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" " Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.127967 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1ed49b-061b-4d49-8755-edc6a84fd049-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.127981 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba1ed49b-061b-4d49-8755-edc6a84fd049-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.128009 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" " Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.128023 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwcbt\" (UniqueName: \"kubernetes.io/projected/ba1ed49b-061b-4d49-8755-edc6a84fd049-kube-api-access-kwcbt\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.143401 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage16-crc" (UniqueName: "kubernetes.io/local-volume/local-storage16-crc") on node "crc" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.147894 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage18-crc" (UniqueName: "kubernetes.io/local-volume/local-storage18-crc") on node "crc" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.228875 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage16-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.228914 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.557891 4687 generic.go:334] "Generic (PLEG): container finished" podID="ba1ed49b-061b-4d49-8755-edc6a84fd049" containerID="78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b" exitCode=143 Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.558092 4687 generic.go:334] "Generic (PLEG): container finished" podID="ba1ed49b-061b-4d49-8755-edc6a84fd049" containerID="e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d" exitCode=143 Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.558139 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.558020 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ba1ed49b-061b-4d49-8755-edc6a84fd049","Type":"ContainerDied","Data":"78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b"} Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.558362 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ba1ed49b-061b-4d49-8755-edc6a84fd049","Type":"ContainerDied","Data":"e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d"} Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.558396 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" 
event={"ID":"ba1ed49b-061b-4d49-8755-edc6a84fd049","Type":"ContainerDied","Data":"f7afe50aff5ca47ff9b9dc52a57180193134edd17e6c4c58483b66f8ef8a1ebc"} Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.558436 4687 scope.go:117] "RemoveContainer" containerID="78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.578290 4687 scope.go:117] "RemoveContainer" containerID="e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.596056 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.602839 4687 scope.go:117] "RemoveContainer" containerID="78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b" Jan 31 07:11:46 crc kubenswrapper[4687]: E0131 07:11:46.603637 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b\": container with ID starting with 78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b not found: ID does not exist" containerID="78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.603748 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b"} err="failed to get container status \"78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b\": rpc error: code = NotFound desc = could not find container \"78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b\": container with ID starting with 78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b not found: ID does not exist" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.603836 4687 scope.go:117] 
"RemoveContainer" containerID="e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d" Jan 31 07:11:46 crc kubenswrapper[4687]: E0131 07:11:46.604283 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d\": container with ID starting with e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d not found: ID does not exist" containerID="e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.604330 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d"} err="failed to get container status \"e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d\": rpc error: code = NotFound desc = could not find container \"e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d\": container with ID starting with e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d not found: ID does not exist" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.604358 4687 scope.go:117] "RemoveContainer" containerID="78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.604747 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b"} err="failed to get container status \"78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b\": rpc error: code = NotFound desc = could not find container \"78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b\": container with ID starting with 78bf7fcbd4151a94018b71648e6144d404ea21667337ffdf55e493e109d8200b not found: ID does not exist" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.604773 4687 
scope.go:117] "RemoveContainer" containerID="e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.605041 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d"} err="failed to get container status \"e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d\": rpc error: code = NotFound desc = could not find container \"e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d\": container with ID starting with e9ed82d19154cad7a1509351ac419c4d81b22aab6a13e3f2166883aff985859d not found: ID does not exist" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.606749 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.631013 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:11:46 crc kubenswrapper[4687]: E0131 07:11:46.631659 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1ed49b-061b-4d49-8755-edc6a84fd049" containerName="glance-httpd" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.631785 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1ed49b-061b-4d49-8755-edc6a84fd049" containerName="glance-httpd" Jan 31 07:11:46 crc kubenswrapper[4687]: E0131 07:11:46.631810 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1ed49b-061b-4d49-8755-edc6a84fd049" containerName="glance-log" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.631816 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1ed49b-061b-4d49-8755-edc6a84fd049" containerName="glance-log" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.631952 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1ed49b-061b-4d49-8755-edc6a84fd049" 
containerName="glance-httpd" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.631983 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1ed49b-061b-4d49-8755-edc6a84fd049" containerName="glance-log" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.633346 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.637449 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.642804 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.737494 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.737599 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c13f92f0-6f82-491f-8e93-f2805292edf9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.737645 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c13f92f0-6f82-491f-8e93-f2805292edf9-logs\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 
31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.737675 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.737708 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.737732 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-run\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.737770 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c13f92f0-6f82-491f-8e93-f2805292edf9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.737900 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.738012 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.738096 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-dev\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.738194 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvfs9\" (UniqueName: \"kubernetes.io/projected/c13f92f0-6f82-491f-8e93-f2805292edf9-kube-api-access-pvfs9\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.738270 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.738436 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c13f92f0-6f82-491f-8e93-f2805292edf9-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.738615 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-sys\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.840579 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-dev\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.840636 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvfs9\" (UniqueName: \"kubernetes.io/projected/c13f92f0-6f82-491f-8e93-f2805292edf9-kube-api-access-pvfs9\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.840658 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.840696 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c13f92f0-6f82-491f-8e93-f2805292edf9-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.840718 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-sys\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.840751 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.840811 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c13f92f0-6f82-491f-8e93-f2805292edf9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.840842 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c13f92f0-6f82-491f-8e93-f2805292edf9-logs\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.840873 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.840884 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-dev\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.840894 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.840959 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-sys\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.840974 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-run\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.840998 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-run\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.840931 
4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.841020 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c13f92f0-6f82-491f-8e93-f2805292edf9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.841041 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.841057 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.841154 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.841360 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.841545 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c13f92f0-6f82-491f-8e93-f2805292edf9-logs\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.841543 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.841744 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") device mount path \"/mnt/openstack/pv18\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.841773 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") device mount path \"/mnt/openstack/pv16\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.841973 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/c13f92f0-6f82-491f-8e93-f2805292edf9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.846523 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c13f92f0-6f82-491f-8e93-f2805292edf9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.847654 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c13f92f0-6f82-491f-8e93-f2805292edf9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.856746 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvfs9\" (UniqueName: \"kubernetes.io/projected/c13f92f0-6f82-491f-8e93-f2805292edf9-kube-api-access-pvfs9\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.868402 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.872974 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:46 crc kubenswrapper[4687]: I0131 07:11:46.968908 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:47 crc kubenswrapper[4687]: I0131 07:11:47.466286 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:11:47 crc kubenswrapper[4687]: I0131 07:11:47.568185 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"c13f92f0-6f82-491f-8e93-f2805292edf9","Type":"ContainerStarted","Data":"152dd6728465edc4ab3d791db475c7d12beb43d75b0f9c3b4bb1743bf743e164"} Jan 31 07:11:47 crc kubenswrapper[4687]: I0131 07:11:47.614519 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1ed49b-061b-4d49-8755-edc6a84fd049" path="/var/lib/kubelet/pods/ba1ed49b-061b-4d49-8755-edc6a84fd049/volumes" Jan 31 07:11:48 crc kubenswrapper[4687]: I0131 07:11:48.578472 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"c13f92f0-6f82-491f-8e93-f2805292edf9","Type":"ContainerStarted","Data":"d3ff90f4d8350a12b77e89e6ce885e9bee193d32d846dbd3ac2299f7a34ef444"} Jan 31 07:11:48 crc kubenswrapper[4687]: I0131 07:11:48.578949 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"c13f92f0-6f82-491f-8e93-f2805292edf9","Type":"ContainerStarted","Data":"363c2e7238686b625de2a6c914c64758028211989df4ff170d72f3eccf89e90e"} Jan 31 07:11:48 crc kubenswrapper[4687]: I0131 07:11:48.603927 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.603905937 podStartE2EDuration="2.603905937s" 
podCreationTimestamp="2026-01-31 07:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:11:48.599231329 +0000 UTC m=+1734.876490954" watchObservedRunningTime="2026-01-31 07:11:48.603905937 +0000 UTC m=+1734.881165512" Jan 31 07:11:52 crc kubenswrapper[4687]: I0131 07:11:52.278551 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="4909dbe9-535d-4581-b009-7c3cb0856689" containerName="glance-api" probeResult="failure" output="Get \"http://10.217.0.116:9292/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 07:11:52 crc kubenswrapper[4687]: I0131 07:11:52.278766 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="4909dbe9-535d-4581-b009-7c3cb0856689" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.116:9292/healthcheck\": dial tcp 10.217.0.116:9292: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 31 07:11:52 crc kubenswrapper[4687]: I0131 07:11:52.278806 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="4909dbe9-535d-4581-b009-7c3cb0856689" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.116:9292/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 07:11:54 crc kubenswrapper[4687]: I0131 07:11:54.420609 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:54 crc kubenswrapper[4687]: I0131 07:11:54.423159 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:54 crc kubenswrapper[4687]: I0131 07:11:54.442278 4687 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:54 crc kubenswrapper[4687]: I0131 07:11:54.462958 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:54 crc kubenswrapper[4687]: I0131 07:11:54.620269 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:54 crc kubenswrapper[4687]: I0131 07:11:54.624441 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:55 crc kubenswrapper[4687]: I0131 07:11:55.610719 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:11:55 crc kubenswrapper[4687]: E0131 07:11:55.611455 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:11:56 crc kubenswrapper[4687]: I0131 07:11:56.614829 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:56 crc kubenswrapper[4687]: I0131 07:11:56.634622 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:11:56 crc kubenswrapper[4687]: I0131 07:11:56.652940 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:11:56 crc kubenswrapper[4687]: I0131 07:11:56.969159 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:56 crc kubenswrapper[4687]: I0131 07:11:56.969231 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:56 crc kubenswrapper[4687]: I0131 07:11:56.997467 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:57 crc kubenswrapper[4687]: I0131 07:11:57.024426 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:57 crc kubenswrapper[4687]: I0131 07:11:57.644380 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:57 crc kubenswrapper[4687]: I0131 07:11:57.644749 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:59 crc kubenswrapper[4687]: I0131 07:11:59.722988 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:11:59 crc kubenswrapper[4687]: I0131 07:11:59.723851 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:11:59 crc kubenswrapper[4687]: I0131 07:11:59.756731 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:12:01 crc kubenswrapper[4687]: I0131 07:12:01.910502 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:12:01 crc kubenswrapper[4687]: I0131 07:12:01.912112 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:01 crc kubenswrapper[4687]: I0131 07:12:01.923889 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 07:12:01 crc kubenswrapper[4687]: I0131 07:12:01.925116 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:01 crc kubenswrapper[4687]: I0131 07:12:01.931850 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:12:01 crc kubenswrapper[4687]: I0131 07:12:01.941270 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.007058 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.008360 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.023199 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.024370 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.032938 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.041184 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.083950 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e34eda2-4099-4d6c-aba3-eb297216a9d5-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.083999 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/111e167d-4141-4668-acd4-c83e49104f69-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084021 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgqvx\" (UniqueName: \"kubernetes.io/projected/111e167d-4141-4668-acd4-c83e49104f69-kube-api-access-tgqvx\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084037 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-var-locks-brick\") pod \"glance-default-external-api-1\" 
(UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084063 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084092 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-sys\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084116 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084203 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/111e167d-4141-4668-acd4-c83e49104f69-logs\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084220 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-2\" (UID: 
\"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084235 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-run\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084261 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-sys\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084281 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084303 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084319 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084331 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/111e167d-4141-4668-acd4-c83e49104f69-config-data\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084477 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n45b2\" (UniqueName: \"kubernetes.io/projected/2e34eda2-4099-4d6c-aba3-eb297216a9d5-kube-api-access-n45b2\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084502 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e34eda2-4099-4d6c-aba3-eb297216a9d5-logs\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084525 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e34eda2-4099-4d6c-aba3-eb297216a9d5-scripts\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084555 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-dev\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084578 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/111e167d-4141-4668-acd4-c83e49104f69-scripts\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084602 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084632 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084657 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084677 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084719 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e34eda2-4099-4d6c-aba3-eb297216a9d5-config-data\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084750 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084789 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-dev\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.084826 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-run\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.185846 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-run\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.185895 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-dev\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.185924 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.185948 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.185997 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e34eda2-4099-4d6c-aba3-eb297216a9d5-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.186015 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-run\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.186029 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-sys\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.186051 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/111e167d-4141-4668-acd4-c83e49104f69-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.186067 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgqvx\" (UniqueName: \"kubernetes.io/projected/111e167d-4141-4668-acd4-c83e49104f69-kube-api-access-tgqvx\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.186063 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-run\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.186086 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" 
(UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.186156 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f238ff1-8922-4817-beec-c0cbb84ac763-config-data\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.186222 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.186253 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.186318 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.186715 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.186823 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f238ff1-8922-4817-beec-c0cbb84ac763-logs\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.186882 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-sys\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.186924 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.186957 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.186968 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-sys\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187019 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/111e167d-4141-4668-acd4-c83e49104f69-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187048 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b97933a-3f30-4de3-bae4-4c366768a611-logs\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187080 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b97933a-3f30-4de3-bae4-4c366768a611-scripts\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187115 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/111e167d-4141-4668-acd4-c83e49104f69-logs\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187115 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187132 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187142 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187228 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") device mount path \"/mnt/openstack/pv14\"" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187229 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e34eda2-4099-4d6c-aba3-eb297216a9d5-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187262 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-run\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187231 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-run\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187322 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-sys\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187366 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-sys\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187399 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187455 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-sys\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187491 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187540 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187579 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187611 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187637 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/111e167d-4141-4668-acd4-c83e49104f69-logs\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187649 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmsdj\" (UniqueName: \"kubernetes.io/projected/4f238ff1-8922-4817-beec-c0cbb84ac763-kube-api-access-hmsdj\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187685 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f238ff1-8922-4817-beec-c0cbb84ac763-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187689 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187713 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187768 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187795 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/111e167d-4141-4668-acd4-c83e49104f69-config-data\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187807 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187826 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187913 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187922 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-cgcxr\" (UniqueName: \"kubernetes.io/projected/8b97933a-3f30-4de3-bae4-4c366768a611-kube-api-access-cgcxr\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187949 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n45b2\" (UniqueName: \"kubernetes.io/projected/2e34eda2-4099-4d6c-aba3-eb297216a9d5-kube-api-access-n45b2\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187975 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e34eda2-4099-4d6c-aba3-eb297216a9d5-logs\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.187993 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b97933a-3f30-4de3-bae4-4c366768a611-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188016 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e34eda2-4099-4d6c-aba3-eb297216a9d5-scripts\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188058 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-dev\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188081 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-run\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188101 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b97933a-3f30-4de3-bae4-4c366768a611-config-data\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188123 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/111e167d-4141-4668-acd4-c83e49104f69-scripts\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188143 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188166 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" 
(UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188315 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e34eda2-4099-4d6c-aba3-eb297216a9d5-logs\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188335 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188358 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-dev\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188453 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188502 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-lib-modules\") pod 
\"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188646 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188679 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188708 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-dev\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188746 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e34eda2-4099-4d6c-aba3-eb297216a9d5-config-data\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188769 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-etc-nvme\") pod \"glance-default-external-api-1\" (UID: 
\"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188807 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f238ff1-8922-4817-beec-c0cbb84ac763-scripts\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188837 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.188907 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-dev\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.189317 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-dev\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.189368 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " 
pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.191450 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.192158 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.195534 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/111e167d-4141-4668-acd4-c83e49104f69-scripts\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.195872 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e34eda2-4099-4d6c-aba3-eb297216a9d5-scripts\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.198775 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e34eda2-4099-4d6c-aba3-eb297216a9d5-config-data\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 
07:12:02.202657 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/111e167d-4141-4668-acd4-c83e49104f69-config-data\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.203025 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgqvx\" (UniqueName: \"kubernetes.io/projected/111e167d-4141-4668-acd4-c83e49104f69-kube-api-access-tgqvx\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.205296 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n45b2\" (UniqueName: \"kubernetes.io/projected/2e34eda2-4099-4d6c-aba3-eb297216a9d5-kube-api-access-n45b2\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.208737 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.211798 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.213172 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-1\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.214488 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-external-api-2\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.231681 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.246099 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.290627 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.290701 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-run\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.290724 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-sys\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.290746 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f238ff1-8922-4817-beec-c0cbb84ac763-config-data\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.290763 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.290781 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.290838 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.290859 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-run\") pod 
\"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.290866 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f238ff1-8922-4817-beec-c0cbb84ac763-logs\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.290883 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.290930 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.290975 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b97933a-3f30-4de3-bae4-4c366768a611-logs\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.291224 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b97933a-3f30-4de3-bae4-4c366768a611-scripts\") pod \"glance-default-internal-api-2\" (UID: 
\"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.291290 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.291440 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-sys\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.291467 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.291491 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.291518 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " 
pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.291541 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmsdj\" (UniqueName: \"kubernetes.io/projected/4f238ff1-8922-4817-beec-c0cbb84ac763-kube-api-access-hmsdj\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.291655 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f238ff1-8922-4817-beec-c0cbb84ac763-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.291694 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.291825 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgcxr\" (UniqueName: \"kubernetes.io/projected/8b97933a-3f30-4de3-bae4-4c366768a611-kube-api-access-cgcxr\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.291846 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b97933a-3f30-4de3-bae4-4c366768a611-httpd-run\") pod \"glance-default-internal-api-2\" (UID: 
\"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.291873 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-run\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.291887 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b97933a-3f30-4de3-bae4-4c366768a611-config-data\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.291895 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f238ff1-8922-4817-beec-c0cbb84ac763-logs\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.291911 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.291974 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-dev\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 
31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.292007 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f238ff1-8922-4817-beec-c0cbb84ac763-scripts\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.292032 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.292106 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") device mount path \"/mnt/openstack/pv12\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.292136 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-dev\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.292172 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 
07:12:02.292294 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.292325 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.292765 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b97933a-3f30-4de3-bae4-4c366768a611-logs\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.292834 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.293594 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b97933a-3f30-4de3-bae4-4c366768a611-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.293671 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-run\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.295229 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") device mount path \"/mnt/openstack/pv06\"" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.290838 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-sys\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.295469 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-sys\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.295493 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.295547 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.295580 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.295866 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f238ff1-8922-4817-beec-c0cbb84ac763-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.295918 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.295945 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-dev\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.295969 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-dev\") 
pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.296051 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") device mount path \"/mnt/openstack/pv19\"" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.297780 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f238ff1-8922-4817-beec-c0cbb84ac763-config-data\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.313468 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b97933a-3f30-4de3-bae4-4c366768a611-scripts\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.314662 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f238ff1-8922-4817-beec-c0cbb84ac763-scripts\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.315351 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b97933a-3f30-4de3-bae4-4c366768a611-config-data\") pod \"glance-default-internal-api-2\" (UID: 
\"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.316897 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgcxr\" (UniqueName: \"kubernetes.io/projected/8b97933a-3f30-4de3-bae4-4c366768a611-kube-api-access-cgcxr\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.323512 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.324379 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmsdj\" (UniqueName: \"kubernetes.io/projected/4f238ff1-8922-4817-beec-c0cbb84ac763-kube-api-access-hmsdj\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.341153 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-1\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.349035 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " 
pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.376139 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"glance-default-internal-api-2\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.623390 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.644070 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.806083 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:12:02 crc kubenswrapper[4687]: I0131 07:12:02.831974 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 07:12:02 crc kubenswrapper[4687]: W0131 07:12:02.914259 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod111e167d_4141_4668_acd4_c83e49104f69.slice/crio-b93a07f3a7bf4e26218971cc2ef2031c07589c30024c297961e5acb61a76e9d9 WatchSource:0}: Error finding container b93a07f3a7bf4e26218971cc2ef2031c07589c30024c297961e5acb61a76e9d9: Status 404 returned error can't find the container with id b93a07f3a7bf4e26218971cc2ef2031c07589c30024c297961e5acb61a76e9d9 Jan 31 07:12:02 crc kubenswrapper[4687]: W0131 07:12:02.924286 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e34eda2_4099_4d6c_aba3_eb297216a9d5.slice/crio-3ee40a554397638991de5bf4d7dfd7041134a60cdf8b079323e834b803079684 WatchSource:0}: Error finding container 3ee40a554397638991de5bf4d7dfd7041134a60cdf8b079323e834b803079684: Status 404 returned error can't find the container with id 3ee40a554397638991de5bf4d7dfd7041134a60cdf8b079323e834b803079684 Jan 31 07:12:03 crc kubenswrapper[4687]: I0131 07:12:03.705673 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2e34eda2-4099-4d6c-aba3-eb297216a9d5","Type":"ContainerStarted","Data":"3ee40a554397638991de5bf4d7dfd7041134a60cdf8b079323e834b803079684"} Jan 31 07:12:03 crc kubenswrapper[4687]: I0131 07:12:03.710857 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:12:03 crc kubenswrapper[4687]: I0131 07:12:03.713842 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"111e167d-4141-4668-acd4-c83e49104f69","Type":"ContainerStarted","Data":"b93a07f3a7bf4e26218971cc2ef2031c07589c30024c297961e5acb61a76e9d9"} Jan 31 07:12:03 crc kubenswrapper[4687]: I0131 07:12:03.729001 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 07:12:03 crc kubenswrapper[4687]: W0131 07:12:03.761794 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b97933a_3f30_4de3_bae4_4c366768a611.slice/crio-b8f28126dcf18d9a9e636e0e9ff36f301fb3f1802e12e876cc6383c174f96bb7 WatchSource:0}: Error finding container b8f28126dcf18d9a9e636e0e9ff36f301fb3f1802e12e876cc6383c174f96bb7: Status 404 returned error can't find the container with id b8f28126dcf18d9a9e636e0e9ff36f301fb3f1802e12e876cc6383c174f96bb7 Jan 31 07:12:04 crc kubenswrapper[4687]: I0131 
07:12:04.725553 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"111e167d-4141-4668-acd4-c83e49104f69","Type":"ContainerStarted","Data":"7c4f515138d3757e91295370c333947e4a522b0f1f231576e31e9d7824924d8a"} Jan 31 07:12:04 crc kubenswrapper[4687]: I0131 07:12:04.726464 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"111e167d-4141-4668-acd4-c83e49104f69","Type":"ContainerStarted","Data":"c10267287b91d3fb5f2523f81329f18d9b3eb4ee7052a7b8d0b4c92b8842a7c2"} Jan 31 07:12:04 crc kubenswrapper[4687]: I0131 07:12:04.730398 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"4f238ff1-8922-4817-beec-c0cbb84ac763","Type":"ContainerStarted","Data":"d36f0f73478e857c1f416097a32aa1b15a35a69e0655513cd645c7a2e7d2c402"} Jan 31 07:12:04 crc kubenswrapper[4687]: I0131 07:12:04.730485 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"4f238ff1-8922-4817-beec-c0cbb84ac763","Type":"ContainerStarted","Data":"4e3e04e975e0515d9f09bdf8a2d51d118b62270845b72c7e2ac0ea644cc75c0e"} Jan 31 07:12:04 crc kubenswrapper[4687]: I0131 07:12:04.730504 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"4f238ff1-8922-4817-beec-c0cbb84ac763","Type":"ContainerStarted","Data":"1917e0f451138f3439bf610ec821db0f118fbe1cb80e7b9c16f7e84c79632323"} Jan 31 07:12:04 crc kubenswrapper[4687]: I0131 07:12:04.732789 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2e34eda2-4099-4d6c-aba3-eb297216a9d5","Type":"ContainerStarted","Data":"0bbfa1387dbdc7ba98bbda106b895e5181e75ea58dadcd8448f897b178b59d96"} Jan 31 07:12:04 crc kubenswrapper[4687]: I0131 07:12:04.732824 4687 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2e34eda2-4099-4d6c-aba3-eb297216a9d5","Type":"ContainerStarted","Data":"45e95a29d1f71c8a46cd862ce4710fbb275d28b854fffd1e79329a05adf72df0"} Jan 31 07:12:04 crc kubenswrapper[4687]: I0131 07:12:04.734993 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"8b97933a-3f30-4de3-bae4-4c366768a611","Type":"ContainerStarted","Data":"b1288578f0ae4110432533d80fb104ffbd0c9632d53d7de00c39752ec4c188b3"} Jan 31 07:12:04 crc kubenswrapper[4687]: I0131 07:12:04.735026 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"8b97933a-3f30-4de3-bae4-4c366768a611","Type":"ContainerStarted","Data":"71c5cc4ec96bb324ffcadf9c5051fd2cc90e205b116e803c7a55756a3556105e"} Jan 31 07:12:04 crc kubenswrapper[4687]: I0131 07:12:04.735039 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"8b97933a-3f30-4de3-bae4-4c366768a611","Type":"ContainerStarted","Data":"b8f28126dcf18d9a9e636e0e9ff36f301fb3f1802e12e876cc6383c174f96bb7"} Jan 31 07:12:04 crc kubenswrapper[4687]: I0131 07:12:04.754474 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-2" podStartSLOduration=4.754455661 podStartE2EDuration="4.754455661s" podCreationTimestamp="2026-01-31 07:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:12:04.748970661 +0000 UTC m=+1751.026230236" watchObservedRunningTime="2026-01-31 07:12:04.754455661 +0000 UTC m=+1751.031715246" Jan 31 07:12:04 crc kubenswrapper[4687]: I0131 07:12:04.779735 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-2" 
podStartSLOduration=4.779710251 podStartE2EDuration="4.779710251s" podCreationTimestamp="2026-01-31 07:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:12:04.776150954 +0000 UTC m=+1751.053410529" watchObservedRunningTime="2026-01-31 07:12:04.779710251 +0000 UTC m=+1751.056969836" Jan 31 07:12:04 crc kubenswrapper[4687]: I0131 07:12:04.806864 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-1" podStartSLOduration=4.806842923 podStartE2EDuration="4.806842923s" podCreationTimestamp="2026-01-31 07:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:12:04.801278291 +0000 UTC m=+1751.078537866" watchObservedRunningTime="2026-01-31 07:12:04.806842923 +0000 UTC m=+1751.084102498" Jan 31 07:12:04 crc kubenswrapper[4687]: I0131 07:12:04.827222 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-1" podStartSLOduration=4.82720563 podStartE2EDuration="4.82720563s" podCreationTimestamp="2026-01-31 07:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:12:04.821257498 +0000 UTC m=+1751.098517083" watchObservedRunningTime="2026-01-31 07:12:04.82720563 +0000 UTC m=+1751.104465205" Jan 31 07:12:09 crc kubenswrapper[4687]: I0131 07:12:09.604384 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:12:09 crc kubenswrapper[4687]: E0131 07:12:09.613118 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.232095 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.232463 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.247687 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.247745 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.260777 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.271945 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.284137 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.292342 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.623880 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:12 crc kubenswrapper[4687]: 
I0131 07:12:12.623928 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.645200 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.645264 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.655163 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.665160 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.678485 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.689592 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.797052 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.797087 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.797099 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.797108 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.797117 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.797125 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.797134 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:12 crc kubenswrapper[4687]: I0131 07:12:12.797143 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:14 crc kubenswrapper[4687]: I0131 07:12:14.714951 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:14 crc kubenswrapper[4687]: I0131 07:12:14.717235 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:14 crc kubenswrapper[4687]: I0131 07:12:14.817351 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:14 crc kubenswrapper[4687]: I0131 07:12:14.817463 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:12:14 crc kubenswrapper[4687]: I0131 07:12:14.822586 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:14 crc kubenswrapper[4687]: I0131 07:12:14.846990 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:14 crc kubenswrapper[4687]: I0131 07:12:14.847737 4687 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:12:14 crc kubenswrapper[4687]: I0131 07:12:14.941459 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:14 crc kubenswrapper[4687]: I0131 07:12:14.941563 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:12:14 crc kubenswrapper[4687]: I0131 07:12:14.942892 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:15 crc kubenswrapper[4687]: I0131 07:12:15.000705 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:15 crc kubenswrapper[4687]: I0131 07:12:15.686940 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 07:12:15 crc kubenswrapper[4687]: I0131 07:12:15.695051 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:12:15 crc kubenswrapper[4687]: I0131 07:12:15.914432 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 07:12:15 crc kubenswrapper[4687]: I0131 07:12:15.925081 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:12:16 crc kubenswrapper[4687]: I0131 07:12:16.827312 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="2e34eda2-4099-4d6c-aba3-eb297216a9d5" containerName="glance-log" containerID="cri-o://0bbfa1387dbdc7ba98bbda106b895e5181e75ea58dadcd8448f897b178b59d96" gracePeriod=30 Jan 31 07:12:16 crc kubenswrapper[4687]: I0131 07:12:16.827736 4687 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="8b97933a-3f30-4de3-bae4-4c366768a611" containerName="glance-log" containerID="cri-o://71c5cc4ec96bb324ffcadf9c5051fd2cc90e205b116e803c7a55756a3556105e" gracePeriod=30 Jan 31 07:12:16 crc kubenswrapper[4687]: I0131 07:12:16.827881 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="111e167d-4141-4668-acd4-c83e49104f69" containerName="glance-log" containerID="cri-o://7c4f515138d3757e91295370c333947e4a522b0f1f231576e31e9d7824924d8a" gracePeriod=30 Jan 31 07:12:16 crc kubenswrapper[4687]: I0131 07:12:16.828005 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="4f238ff1-8922-4817-beec-c0cbb84ac763" containerName="glance-log" containerID="cri-o://4e3e04e975e0515d9f09bdf8a2d51d118b62270845b72c7e2ac0ea644cc75c0e" gracePeriod=30 Jan 31 07:12:16 crc kubenswrapper[4687]: I0131 07:12:16.828258 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="2e34eda2-4099-4d6c-aba3-eb297216a9d5" containerName="glance-httpd" containerID="cri-o://45e95a29d1f71c8a46cd862ce4710fbb275d28b854fffd1e79329a05adf72df0" gracePeriod=30 Jan 31 07:12:16 crc kubenswrapper[4687]: I0131 07:12:16.828329 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="8b97933a-3f30-4de3-bae4-4c366768a611" containerName="glance-httpd" containerID="cri-o://b1288578f0ae4110432533d80fb104ffbd0c9632d53d7de00c39752ec4c188b3" gracePeriod=30 Jan 31 07:12:16 crc kubenswrapper[4687]: I0131 07:12:16.828394 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="111e167d-4141-4668-acd4-c83e49104f69" containerName="glance-httpd" 
containerID="cri-o://c10267287b91d3fb5f2523f81329f18d9b3eb4ee7052a7b8d0b4c92b8842a7c2" gracePeriod=30 Jan 31 07:12:16 crc kubenswrapper[4687]: I0131 07:12:16.828793 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="4f238ff1-8922-4817-beec-c0cbb84ac763" containerName="glance-httpd" containerID="cri-o://d36f0f73478e857c1f416097a32aa1b15a35a69e0655513cd645c7a2e7d2c402" gracePeriod=30 Jan 31 07:12:16 crc kubenswrapper[4687]: I0131 07:12:16.836710 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="4f238ff1-8922-4817-beec-c0cbb84ac763" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.132:9292/healthcheck\": EOF" Jan 31 07:12:16 crc kubenswrapper[4687]: I0131 07:12:16.836904 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="4f238ff1-8922-4817-beec-c0cbb84ac763" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.132:9292/healthcheck\": EOF" Jan 31 07:12:16 crc kubenswrapper[4687]: I0131 07:12:16.838247 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="8b97933a-3f30-4de3-bae4-4c366768a611" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.133:9292/healthcheck\": EOF" Jan 31 07:12:16 crc kubenswrapper[4687]: I0131 07:12:16.839635 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="8b97933a-3f30-4de3-bae4-4c366768a611" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.133:9292/healthcheck\": EOF" Jan 31 07:12:16 crc kubenswrapper[4687]: I0131 07:12:16.841085 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-external-api-2" 
podUID="111e167d-4141-4668-acd4-c83e49104f69" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.131:9292/healthcheck\": EOF" Jan 31 07:12:17 crc kubenswrapper[4687]: I0131 07:12:17.835983 4687 generic.go:334] "Generic (PLEG): container finished" podID="8b97933a-3f30-4de3-bae4-4c366768a611" containerID="71c5cc4ec96bb324ffcadf9c5051fd2cc90e205b116e803c7a55756a3556105e" exitCode=143 Jan 31 07:12:17 crc kubenswrapper[4687]: I0131 07:12:17.836078 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"8b97933a-3f30-4de3-bae4-4c366768a611","Type":"ContainerDied","Data":"71c5cc4ec96bb324ffcadf9c5051fd2cc90e205b116e803c7a55756a3556105e"} Jan 31 07:12:17 crc kubenswrapper[4687]: I0131 07:12:17.838099 4687 generic.go:334] "Generic (PLEG): container finished" podID="111e167d-4141-4668-acd4-c83e49104f69" containerID="7c4f515138d3757e91295370c333947e4a522b0f1f231576e31e9d7824924d8a" exitCode=143 Jan 31 07:12:17 crc kubenswrapper[4687]: I0131 07:12:17.838153 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"111e167d-4141-4668-acd4-c83e49104f69","Type":"ContainerDied","Data":"7c4f515138d3757e91295370c333947e4a522b0f1f231576e31e9d7824924d8a"} Jan 31 07:12:17 crc kubenswrapper[4687]: I0131 07:12:17.840030 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f238ff1-8922-4817-beec-c0cbb84ac763" containerID="4e3e04e975e0515d9f09bdf8a2d51d118b62270845b72c7e2ac0ea644cc75c0e" exitCode=143 Jan 31 07:12:17 crc kubenswrapper[4687]: I0131 07:12:17.840140 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"4f238ff1-8922-4817-beec-c0cbb84ac763","Type":"ContainerDied","Data":"4e3e04e975e0515d9f09bdf8a2d51d118b62270845b72c7e2ac0ea644cc75c0e"} Jan 31 07:12:17 crc kubenswrapper[4687]: I0131 07:12:17.841682 4687 generic.go:334] "Generic (PLEG): 
container finished" podID="2e34eda2-4099-4d6c-aba3-eb297216a9d5" containerID="0bbfa1387dbdc7ba98bbda106b895e5181e75ea58dadcd8448f897b178b59d96" exitCode=143 Jan 31 07:12:17 crc kubenswrapper[4687]: I0131 07:12:17.841737 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2e34eda2-4099-4d6c-aba3-eb297216a9d5","Type":"ContainerDied","Data":"0bbfa1387dbdc7ba98bbda106b895e5181e75ea58dadcd8448f897b178b59d96"} Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.343998 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.409547 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-lib-modules\") pod \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.409601 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e34eda2-4099-4d6c-aba3-eb297216a9d5-httpd-run\") pod \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.409631 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-dev\") pod \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.409652 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e34eda2-4099-4d6c-aba3-eb297216a9d5-scripts\") pod \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\" (UID: 
\"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.409707 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n45b2\" (UniqueName: \"kubernetes.io/projected/2e34eda2-4099-4d6c-aba3-eb297216a9d5-kube-api-access-n45b2\") pod \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.409724 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-etc-nvme\") pod \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.409745 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-run\") pod \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.409759 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-etc-iscsi\") pod \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.409779 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e34eda2-4099-4d6c-aba3-eb297216a9d5-config-data\") pod \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.409852 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-var-locks-brick\") pod \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.409871 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e34eda2-4099-4d6c-aba3-eb297216a9d5-logs\") pod \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.409888 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-sys\") pod \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.409929 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.409955 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\" (UID: \"2e34eda2-4099-4d6c-aba3-eb297216a9d5\") " Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.410257 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-run" (OuterVolumeSpecName: "run") pod "2e34eda2-4099-4d6c-aba3-eb297216a9d5" (UID: "2e34eda2-4099-4d6c-aba3-eb297216a9d5"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.410310 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2e34eda2-4099-4d6c-aba3-eb297216a9d5" (UID: "2e34eda2-4099-4d6c-aba3-eb297216a9d5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.410477 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "2e34eda2-4099-4d6c-aba3-eb297216a9d5" (UID: "2e34eda2-4099-4d6c-aba3-eb297216a9d5"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.410516 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "2e34eda2-4099-4d6c-aba3-eb297216a9d5" (UID: "2e34eda2-4099-4d6c-aba3-eb297216a9d5"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.410537 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-sys" (OuterVolumeSpecName: "sys") pod "2e34eda2-4099-4d6c-aba3-eb297216a9d5" (UID: "2e34eda2-4099-4d6c-aba3-eb297216a9d5"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.410610 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-dev" (OuterVolumeSpecName: "dev") pod "2e34eda2-4099-4d6c-aba3-eb297216a9d5" (UID: "2e34eda2-4099-4d6c-aba3-eb297216a9d5"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.410665 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "2e34eda2-4099-4d6c-aba3-eb297216a9d5" (UID: "2e34eda2-4099-4d6c-aba3-eb297216a9d5"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.410870 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e34eda2-4099-4d6c-aba3-eb297216a9d5-logs" (OuterVolumeSpecName: "logs") pod "2e34eda2-4099-4d6c-aba3-eb297216a9d5" (UID: "2e34eda2-4099-4d6c-aba3-eb297216a9d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.411051 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e34eda2-4099-4d6c-aba3-eb297216a9d5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2e34eda2-4099-4d6c-aba3-eb297216a9d5" (UID: "2e34eda2-4099-4d6c-aba3-eb297216a9d5"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.415654 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance-cache") pod "2e34eda2-4099-4d6c-aba3-eb297216a9d5" (UID: "2e34eda2-4099-4d6c-aba3-eb297216a9d5"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.415874 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e34eda2-4099-4d6c-aba3-eb297216a9d5-kube-api-access-n45b2" (OuterVolumeSpecName: "kube-api-access-n45b2") pod "2e34eda2-4099-4d6c-aba3-eb297216a9d5" (UID: "2e34eda2-4099-4d6c-aba3-eb297216a9d5"). InnerVolumeSpecName "kube-api-access-n45b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.425611 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "2e34eda2-4099-4d6c-aba3-eb297216a9d5" (UID: "2e34eda2-4099-4d6c-aba3-eb297216a9d5"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.427563 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e34eda2-4099-4d6c-aba3-eb297216a9d5-scripts" (OuterVolumeSpecName: "scripts") pod "2e34eda2-4099-4d6c-aba3-eb297216a9d5" (UID: "2e34eda2-4099-4d6c-aba3-eb297216a9d5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.459161 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e34eda2-4099-4d6c-aba3-eb297216a9d5-config-data" (OuterVolumeSpecName: "config-data") pod "2e34eda2-4099-4d6c-aba3-eb297216a9d5" (UID: "2e34eda2-4099-4d6c-aba3-eb297216a9d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.512088 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.512134 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e34eda2-4099-4d6c-aba3-eb297216a9d5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.512148 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.512159 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e34eda2-4099-4d6c-aba3-eb297216a9d5-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.512171 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n45b2\" (UniqueName: \"kubernetes.io/projected/2e34eda2-4099-4d6c-aba3-eb297216a9d5-kube-api-access-n45b2\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.512186 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.512196 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.512206 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.512217 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e34eda2-4099-4d6c-aba3-eb297216a9d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.512227 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.512237 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e34eda2-4099-4d6c-aba3-eb297216a9d5-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.512247 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2e34eda2-4099-4d6c-aba3-eb297216a9d5-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.512288 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.512307 4687 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.536821 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.537034 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.614048 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.614082 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.865899 4687 generic.go:334] "Generic (PLEG): container finished" podID="2e34eda2-4099-4d6c-aba3-eb297216a9d5" containerID="45e95a29d1f71c8a46cd862ce4710fbb275d28b854fffd1e79329a05adf72df0" exitCode=0 Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.865952 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2e34eda2-4099-4d6c-aba3-eb297216a9d5","Type":"ContainerDied","Data":"45e95a29d1f71c8a46cd862ce4710fbb275d28b854fffd1e79329a05adf72df0"} Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.865993 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.866007 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2e34eda2-4099-4d6c-aba3-eb297216a9d5","Type":"ContainerDied","Data":"3ee40a554397638991de5bf4d7dfd7041134a60cdf8b079323e834b803079684"} Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.866030 4687 scope.go:117] "RemoveContainer" containerID="45e95a29d1f71c8a46cd862ce4710fbb275d28b854fffd1e79329a05adf72df0" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.899233 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.903901 4687 scope.go:117] "RemoveContainer" containerID="0bbfa1387dbdc7ba98bbda106b895e5181e75ea58dadcd8448f897b178b59d96" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.905456 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.929587 4687 scope.go:117] "RemoveContainer" containerID="45e95a29d1f71c8a46cd862ce4710fbb275d28b854fffd1e79329a05adf72df0" Jan 31 07:12:20 crc kubenswrapper[4687]: E0131 07:12:20.930148 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45e95a29d1f71c8a46cd862ce4710fbb275d28b854fffd1e79329a05adf72df0\": container with ID starting with 45e95a29d1f71c8a46cd862ce4710fbb275d28b854fffd1e79329a05adf72df0 not found: ID does not exist" containerID="45e95a29d1f71c8a46cd862ce4710fbb275d28b854fffd1e79329a05adf72df0" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.930187 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e95a29d1f71c8a46cd862ce4710fbb275d28b854fffd1e79329a05adf72df0"} err="failed to 
get container status \"45e95a29d1f71c8a46cd862ce4710fbb275d28b854fffd1e79329a05adf72df0\": rpc error: code = NotFound desc = could not find container \"45e95a29d1f71c8a46cd862ce4710fbb275d28b854fffd1e79329a05adf72df0\": container with ID starting with 45e95a29d1f71c8a46cd862ce4710fbb275d28b854fffd1e79329a05adf72df0 not found: ID does not exist" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.930212 4687 scope.go:117] "RemoveContainer" containerID="0bbfa1387dbdc7ba98bbda106b895e5181e75ea58dadcd8448f897b178b59d96" Jan 31 07:12:20 crc kubenswrapper[4687]: E0131 07:12:20.930602 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bbfa1387dbdc7ba98bbda106b895e5181e75ea58dadcd8448f897b178b59d96\": container with ID starting with 0bbfa1387dbdc7ba98bbda106b895e5181e75ea58dadcd8448f897b178b59d96 not found: ID does not exist" containerID="0bbfa1387dbdc7ba98bbda106b895e5181e75ea58dadcd8448f897b178b59d96" Jan 31 07:12:20 crc kubenswrapper[4687]: I0131 07:12:20.930653 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bbfa1387dbdc7ba98bbda106b895e5181e75ea58dadcd8448f897b178b59d96"} err="failed to get container status \"0bbfa1387dbdc7ba98bbda106b895e5181e75ea58dadcd8448f897b178b59d96\": rpc error: code = NotFound desc = could not find container \"0bbfa1387dbdc7ba98bbda106b895e5181e75ea58dadcd8448f897b178b59d96\": container with ID starting with 0bbfa1387dbdc7ba98bbda106b895e5181e75ea58dadcd8448f897b178b59d96 not found: ID does not exist" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.611982 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e34eda2-4099-4d6c-aba3-eb297216a9d5" path="/var/lib/kubelet/pods/2e34eda2-4099-4d6c-aba3-eb297216a9d5/volumes" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.663433 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732523 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgqvx\" (UniqueName: \"kubernetes.io/projected/111e167d-4141-4668-acd4-c83e49104f69-kube-api-access-tgqvx\") pod \"111e167d-4141-4668-acd4-c83e49104f69\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732580 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-run\") pod \"111e167d-4141-4668-acd4-c83e49104f69\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732621 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-sys\") pod \"111e167d-4141-4668-acd4-c83e49104f69\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732645 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/111e167d-4141-4668-acd4-c83e49104f69-httpd-run\") pod \"111e167d-4141-4668-acd4-c83e49104f69\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732699 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/111e167d-4141-4668-acd4-c83e49104f69-logs\") pod \"111e167d-4141-4668-acd4-c83e49104f69\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732663 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-run" 
(OuterVolumeSpecName: "run") pod "111e167d-4141-4668-acd4-c83e49104f69" (UID: "111e167d-4141-4668-acd4-c83e49104f69"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732768 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-dev\") pod \"111e167d-4141-4668-acd4-c83e49104f69\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732806 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-lib-modules\") pod \"111e167d-4141-4668-acd4-c83e49104f69\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732823 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/111e167d-4141-4668-acd4-c83e49104f69-config-data\") pod \"111e167d-4141-4668-acd4-c83e49104f69\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732840 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-etc-iscsi\") pod \"111e167d-4141-4668-acd4-c83e49104f69\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732838 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-sys" (OuterVolumeSpecName: "sys") pod "111e167d-4141-4668-acd4-c83e49104f69" (UID: "111e167d-4141-4668-acd4-c83e49104f69"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732872 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/111e167d-4141-4668-acd4-c83e49104f69-scripts\") pod \"111e167d-4141-4668-acd4-c83e49104f69\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732905 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"111e167d-4141-4668-acd4-c83e49104f69\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732928 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-var-locks-brick\") pod \"111e167d-4141-4668-acd4-c83e49104f69\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732994 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-etc-nvme\") pod \"111e167d-4141-4668-acd4-c83e49104f69\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.733012 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"111e167d-4141-4668-acd4-c83e49104f69\" (UID: \"111e167d-4141-4668-acd4-c83e49104f69\") " Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732902 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-lib-modules" (OuterVolumeSpecName: "lib-modules") pod 
"111e167d-4141-4668-acd4-c83e49104f69" (UID: "111e167d-4141-4668-acd4-c83e49104f69"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.732929 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-dev" (OuterVolumeSpecName: "dev") pod "111e167d-4141-4668-acd4-c83e49104f69" (UID: "111e167d-4141-4668-acd4-c83e49104f69"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.733085 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/111e167d-4141-4668-acd4-c83e49104f69-logs" (OuterVolumeSpecName: "logs") pod "111e167d-4141-4668-acd4-c83e49104f69" (UID: "111e167d-4141-4668-acd4-c83e49104f69"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.733057 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/111e167d-4141-4668-acd4-c83e49104f69-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "111e167d-4141-4668-acd4-c83e49104f69" (UID: "111e167d-4141-4668-acd4-c83e49104f69"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.733298 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.733310 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.733322 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.733330 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.733338 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/111e167d-4141-4668-acd4-c83e49104f69-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.733346 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/111e167d-4141-4668-acd4-c83e49104f69-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.733840 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "111e167d-4141-4668-acd4-c83e49104f69" (UID: "111e167d-4141-4668-acd4-c83e49104f69"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.733866 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "111e167d-4141-4668-acd4-c83e49104f69" (UID: "111e167d-4141-4668-acd4-c83e49104f69"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.733891 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "111e167d-4141-4668-acd4-c83e49104f69" (UID: "111e167d-4141-4668-acd4-c83e49104f69"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.737102 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage14-crc" (OuterVolumeSpecName: "glance") pod "111e167d-4141-4668-acd4-c83e49104f69" (UID: "111e167d-4141-4668-acd4-c83e49104f69"). InnerVolumeSpecName "local-storage14-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.737206 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance-cache") pod "111e167d-4141-4668-acd4-c83e49104f69" (UID: "111e167d-4141-4668-acd4-c83e49104f69"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.737517 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/111e167d-4141-4668-acd4-c83e49104f69-scripts" (OuterVolumeSpecName: "scripts") pod "111e167d-4141-4668-acd4-c83e49104f69" (UID: "111e167d-4141-4668-acd4-c83e49104f69"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.737620 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/111e167d-4141-4668-acd4-c83e49104f69-kube-api-access-tgqvx" (OuterVolumeSpecName: "kube-api-access-tgqvx") pod "111e167d-4141-4668-acd4-c83e49104f69" (UID: "111e167d-4141-4668-acd4-c83e49104f69"). InnerVolumeSpecName "kube-api-access-tgqvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.770246 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/111e167d-4141-4668-acd4-c83e49104f69-config-data" (OuterVolumeSpecName: "config-data") pod "111e167d-4141-4668-acd4-c83e49104f69" (UID: "111e167d-4141-4668-acd4-c83e49104f69"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.835153 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/111e167d-4141-4668-acd4-c83e49104f69-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.835183 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.835192 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/111e167d-4141-4668-acd4-c83e49104f69-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.835223 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.835237 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.835250 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/111e167d-4141-4668-acd4-c83e49104f69-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.835264 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" " Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.835274 4687 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-tgqvx\" (UniqueName: \"kubernetes.io/projected/111e167d-4141-4668-acd4-c83e49104f69-kube-api-access-tgqvx\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.849584 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage14-crc" (UniqueName: "kubernetes.io/local-volume/local-storage14-crc") on node "crc" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.850600 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.875358 4687 generic.go:334] "Generic (PLEG): container finished" podID="8b97933a-3f30-4de3-bae4-4c366768a611" containerID="b1288578f0ae4110432533d80fb104ffbd0c9632d53d7de00c39752ec4c188b3" exitCode=0 Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.875426 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"8b97933a-3f30-4de3-bae4-4c366768a611","Type":"ContainerDied","Data":"b1288578f0ae4110432533d80fb104ffbd0c9632d53d7de00c39752ec4c188b3"} Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.876779 4687 generic.go:334] "Generic (PLEG): container finished" podID="111e167d-4141-4668-acd4-c83e49104f69" containerID="c10267287b91d3fb5f2523f81329f18d9b3eb4ee7052a7b8d0b4c92b8842a7c2" exitCode=0 Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.876823 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"111e167d-4141-4668-acd4-c83e49104f69","Type":"ContainerDied","Data":"c10267287b91d3fb5f2523f81329f18d9b3eb4ee7052a7b8d0b4c92b8842a7c2"} Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.876840 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" 
event={"ID":"111e167d-4141-4668-acd4-c83e49104f69","Type":"ContainerDied","Data":"b93a07f3a7bf4e26218971cc2ef2031c07589c30024c297961e5acb61a76e9d9"} Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.876859 4687 scope.go:117] "RemoveContainer" containerID="c10267287b91d3fb5f2523f81329f18d9b3eb4ee7052a7b8d0b4c92b8842a7c2" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.876954 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.892684 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f238ff1-8922-4817-beec-c0cbb84ac763" containerID="d36f0f73478e857c1f416097a32aa1b15a35a69e0655513cd645c7a2e7d2c402" exitCode=0 Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.892758 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"4f238ff1-8922-4817-beec-c0cbb84ac763","Type":"ContainerDied","Data":"d36f0f73478e857c1f416097a32aa1b15a35a69e0655513cd645c7a2e7d2c402"} Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.918377 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.924135 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.934212 4687 scope.go:117] "RemoveContainer" containerID="7c4f515138d3757e91295370c333947e4a522b0f1f231576e31e9d7824924d8a" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.939264 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.939299 4687 reconciler_common.go:293] "Volume detached for volume 
\"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.963030 4687 scope.go:117] "RemoveContainer" containerID="c10267287b91d3fb5f2523f81329f18d9b3eb4ee7052a7b8d0b4c92b8842a7c2" Jan 31 07:12:21 crc kubenswrapper[4687]: E0131 07:12:21.964457 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c10267287b91d3fb5f2523f81329f18d9b3eb4ee7052a7b8d0b4c92b8842a7c2\": container with ID starting with c10267287b91d3fb5f2523f81329f18d9b3eb4ee7052a7b8d0b4c92b8842a7c2 not found: ID does not exist" containerID="c10267287b91d3fb5f2523f81329f18d9b3eb4ee7052a7b8d0b4c92b8842a7c2" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.964520 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c10267287b91d3fb5f2523f81329f18d9b3eb4ee7052a7b8d0b4c92b8842a7c2"} err="failed to get container status \"c10267287b91d3fb5f2523f81329f18d9b3eb4ee7052a7b8d0b4c92b8842a7c2\": rpc error: code = NotFound desc = could not find container \"c10267287b91d3fb5f2523f81329f18d9b3eb4ee7052a7b8d0b4c92b8842a7c2\": container with ID starting with c10267287b91d3fb5f2523f81329f18d9b3eb4ee7052a7b8d0b4c92b8842a7c2 not found: ID does not exist" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.964548 4687 scope.go:117] "RemoveContainer" containerID="7c4f515138d3757e91295370c333947e4a522b0f1f231576e31e9d7824924d8a" Jan 31 07:12:21 crc kubenswrapper[4687]: E0131 07:12:21.964950 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c4f515138d3757e91295370c333947e4a522b0f1f231576e31e9d7824924d8a\": container with ID starting with 7c4f515138d3757e91295370c333947e4a522b0f1f231576e31e9d7824924d8a not found: ID does not exist" 
containerID="7c4f515138d3757e91295370c333947e4a522b0f1f231576e31e9d7824924d8a" Jan 31 07:12:21 crc kubenswrapper[4687]: I0131 07:12:21.964995 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c4f515138d3757e91295370c333947e4a522b0f1f231576e31e9d7824924d8a"} err="failed to get container status \"7c4f515138d3757e91295370c333947e4a522b0f1f231576e31e9d7824924d8a\": rpc error: code = NotFound desc = could not find container \"7c4f515138d3757e91295370c333947e4a522b0f1f231576e31e9d7824924d8a\": container with ID starting with 7c4f515138d3757e91295370c333947e4a522b0f1f231576e31e9d7824924d8a not found: ID does not exist" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.026152 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.149884 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.152351 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-var-locks-brick\") pod \"8b97933a-3f30-4de3-bae4-4c366768a611\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.152445 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-run\") pod \"8b97933a-3f30-4de3-bae4-4c366768a611\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.152558 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"8b97933a-3f30-4de3-bae4-4c366768a611\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.152620 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"8b97933a-3f30-4de3-bae4-4c366768a611\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.152658 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgcxr\" (UniqueName: \"kubernetes.io/projected/8b97933a-3f30-4de3-bae4-4c366768a611-kube-api-access-cgcxr\") pod \"8b97933a-3f30-4de3-bae4-4c366768a611\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.152687 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b97933a-3f30-4de3-bae4-4c366768a611-config-data\") pod \"8b97933a-3f30-4de3-bae4-4c366768a611\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.152721 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b97933a-3f30-4de3-bae4-4c366768a611-scripts\") pod \"8b97933a-3f30-4de3-bae4-4c366768a611\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.152745 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-lib-modules\") pod \"8b97933a-3f30-4de3-bae4-4c366768a611\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.152766 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-etc-nvme\") pod \"8b97933a-3f30-4de3-bae4-4c366768a611\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.152807 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b97933a-3f30-4de3-bae4-4c366768a611-httpd-run\") pod \"8b97933a-3f30-4de3-bae4-4c366768a611\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.152836 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-etc-iscsi\") pod \"8b97933a-3f30-4de3-bae4-4c366768a611\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.152909 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-dev\") pod \"8b97933a-3f30-4de3-bae4-4c366768a611\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.152982 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b97933a-3f30-4de3-bae4-4c366768a611-logs\") pod \"8b97933a-3f30-4de3-bae4-4c366768a611\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.153012 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-sys\") pod \"8b97933a-3f30-4de3-bae4-4c366768a611\" (UID: \"8b97933a-3f30-4de3-bae4-4c366768a611\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.153389 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-sys" (OuterVolumeSpecName: "sys") pod "8b97933a-3f30-4de3-bae4-4c366768a611" (UID: "8b97933a-3f30-4de3-bae4-4c366768a611"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.153458 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "8b97933a-3f30-4de3-bae4-4c366768a611" (UID: "8b97933a-3f30-4de3-bae4-4c366768a611"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.153486 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-run" (OuterVolumeSpecName: "run") pod "8b97933a-3f30-4de3-bae4-4c366768a611" (UID: "8b97933a-3f30-4de3-bae4-4c366768a611"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.155110 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "8b97933a-3f30-4de3-bae4-4c366768a611" (UID: "8b97933a-3f30-4de3-bae4-4c366768a611"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.155118 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "8b97933a-3f30-4de3-bae4-4c366768a611" (UID: "8b97933a-3f30-4de3-bae4-4c366768a611"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.155129 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-dev" (OuterVolumeSpecName: "dev") pod "8b97933a-3f30-4de3-bae4-4c366768a611" (UID: "8b97933a-3f30-4de3-bae4-4c366768a611"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.155460 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8b97933a-3f30-4de3-bae4-4c366768a611" (UID: "8b97933a-3f30-4de3-bae4-4c366768a611"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.159461 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b97933a-3f30-4de3-bae4-4c366768a611-scripts" (OuterVolumeSpecName: "scripts") pod "8b97933a-3f30-4de3-bae4-4c366768a611" (UID: "8b97933a-3f30-4de3-bae4-4c366768a611"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.161538 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "8b97933a-3f30-4de3-bae4-4c366768a611" (UID: "8b97933a-3f30-4de3-bae4-4c366768a611"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.161916 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b97933a-3f30-4de3-bae4-4c366768a611-kube-api-access-cgcxr" (OuterVolumeSpecName: "kube-api-access-cgcxr") pod "8b97933a-3f30-4de3-bae4-4c366768a611" (UID: "8b97933a-3f30-4de3-bae4-4c366768a611"). InnerVolumeSpecName "kube-api-access-cgcxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.162269 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b97933a-3f30-4de3-bae4-4c366768a611-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8b97933a-3f30-4de3-bae4-4c366768a611" (UID: "8b97933a-3f30-4de3-bae4-4c366768a611"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.162558 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage19-crc" (OuterVolumeSpecName: "glance-cache") pod "8b97933a-3f30-4de3-bae4-4c366768a611" (UID: "8b97933a-3f30-4de3-bae4-4c366768a611"). InnerVolumeSpecName "local-storage19-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.162617 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b97933a-3f30-4de3-bae4-4c366768a611-logs" (OuterVolumeSpecName: "logs") pod "8b97933a-3f30-4de3-bae4-4c366768a611" (UID: "8b97933a-3f30-4de3-bae4-4c366768a611"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.199257 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b97933a-3f30-4de3-bae4-4c366768a611-config-data" (OuterVolumeSpecName: "config-data") pod "8b97933a-3f30-4de3-bae4-4c366768a611" (UID: "8b97933a-3f30-4de3-bae4-4c366768a611"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.254278 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmsdj\" (UniqueName: \"kubernetes.io/projected/4f238ff1-8922-4817-beec-c0cbb84ac763-kube-api-access-hmsdj\") pod \"4f238ff1-8922-4817-beec-c0cbb84ac763\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.254322 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-dev\") pod \"4f238ff1-8922-4817-beec-c0cbb84ac763\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.254347 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-etc-iscsi\") pod \"4f238ff1-8922-4817-beec-c0cbb84ac763\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.254367 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-etc-nvme\") pod \"4f238ff1-8922-4817-beec-c0cbb84ac763\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.254381 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-sys\") pod \"4f238ff1-8922-4817-beec-c0cbb84ac763\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.254395 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f238ff1-8922-4817-beec-c0cbb84ac763-scripts\") pod \"4f238ff1-8922-4817-beec-c0cbb84ac763\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.254454 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-var-locks-brick\") pod \"4f238ff1-8922-4817-beec-c0cbb84ac763\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.254473 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f238ff1-8922-4817-beec-c0cbb84ac763-config-data\") pod \"4f238ff1-8922-4817-beec-c0cbb84ac763\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.254525 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"4f238ff1-8922-4817-beec-c0cbb84ac763\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.254571 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"4f238ff1-8922-4817-beec-c0cbb84ac763\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.254614 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-lib-modules\") pod \"4f238ff1-8922-4817-beec-c0cbb84ac763\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.254652 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f238ff1-8922-4817-beec-c0cbb84ac763-logs\") pod \"4f238ff1-8922-4817-beec-c0cbb84ac763\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.254674 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f238ff1-8922-4817-beec-c0cbb84ac763-httpd-run\") pod \"4f238ff1-8922-4817-beec-c0cbb84ac763\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.254705 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-run\") pod \"4f238ff1-8922-4817-beec-c0cbb84ac763\" (UID: \"4f238ff1-8922-4817-beec-c0cbb84ac763\") " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.254938 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "4f238ff1-8922-4817-beec-c0cbb84ac763" (UID: "4f238ff1-8922-4817-beec-c0cbb84ac763"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.254961 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "4f238ff1-8922-4817-beec-c0cbb84ac763" (UID: "4f238ff1-8922-4817-beec-c0cbb84ac763"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.255016 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "4f238ff1-8922-4817-beec-c0cbb84ac763" (UID: "4f238ff1-8922-4817-beec-c0cbb84ac763"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.255051 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-sys" (OuterVolumeSpecName: "sys") pod "4f238ff1-8922-4817-beec-c0cbb84ac763" (UID: "4f238ff1-8922-4817-beec-c0cbb84ac763"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.255335 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-dev" (OuterVolumeSpecName: "dev") pod "4f238ff1-8922-4817-beec-c0cbb84ac763" (UID: "4f238ff1-8922-4817-beec-c0cbb84ac763"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.255369 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4f238ff1-8922-4817-beec-c0cbb84ac763" (UID: "4f238ff1-8922-4817-beec-c0cbb84ac763"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.255430 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-run" (OuterVolumeSpecName: "run") pod "4f238ff1-8922-4817-beec-c0cbb84ac763" (UID: "4f238ff1-8922-4817-beec-c0cbb84ac763"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.255431 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f238ff1-8922-4817-beec-c0cbb84ac763-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4f238ff1-8922-4817-beec-c0cbb84ac763" (UID: "4f238ff1-8922-4817-beec-c0cbb84ac763"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.255592 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f238ff1-8922-4817-beec-c0cbb84ac763-logs" (OuterVolumeSpecName: "logs") pod "4f238ff1-8922-4817-beec-c0cbb84ac763" (UID: "4f238ff1-8922-4817-beec-c0cbb84ac763"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.255853 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.255882 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.255896 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.255923 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.255936 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.255952 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") on node \"crc\" " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.255964 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f238ff1-8922-4817-beec-c0cbb84ac763-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.255979 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgcxr\" (UniqueName: 
\"kubernetes.io/projected/8b97933a-3f30-4de3-bae4-4c366768a611-kube-api-access-cgcxr\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.255991 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f238ff1-8922-4817-beec-c0cbb84ac763-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.256003 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b97933a-3f30-4de3-bae4-4c366768a611-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.256013 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b97933a-3f30-4de3-bae4-4c366768a611-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.256025 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.256035 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.256046 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b97933a-3f30-4de3-bae4-4c366768a611-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.256057 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.256067 4687 
reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.256077 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.256088 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8b97933a-3f30-4de3-bae4-4c366768a611-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.256099 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.256111 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.256135 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.256148 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4f238ff1-8922-4817-beec-c0cbb84ac763-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.256160 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b97933a-3f30-4de3-bae4-4c366768a611-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 
07:12:22.258872 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "4f238ff1-8922-4817-beec-c0cbb84ac763" (UID: "4f238ff1-8922-4817-beec-c0cbb84ac763"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.267051 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f238ff1-8922-4817-beec-c0cbb84ac763-kube-api-access-hmsdj" (OuterVolumeSpecName: "kube-api-access-hmsdj") pod "4f238ff1-8922-4817-beec-c0cbb84ac763" (UID: "4f238ff1-8922-4817-beec-c0cbb84ac763"). InnerVolumeSpecName "kube-api-access-hmsdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.270470 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f238ff1-8922-4817-beec-c0cbb84ac763-scripts" (OuterVolumeSpecName: "scripts") pod "4f238ff1-8922-4817-beec-c0cbb84ac763" (UID: "4f238ff1-8922-4817-beec-c0cbb84ac763"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.270486 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance-cache") pod "4f238ff1-8922-4817-beec-c0cbb84ac763" (UID: "4f238ff1-8922-4817-beec-c0cbb84ac763"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.271884 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.274083 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage19-crc" (UniqueName: "kubernetes.io/local-volume/local-storage19-crc") on node "crc" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.294524 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f238ff1-8922-4817-beec-c0cbb84ac763-config-data" (OuterVolumeSpecName: "config-data") pod "4f238ff1-8922-4817-beec-c0cbb84ac763" (UID: "4f238ff1-8922-4817-beec-c0cbb84ac763"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.357897 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.357939 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.357954 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmsdj\" (UniqueName: \"kubernetes.io/projected/4f238ff1-8922-4817-beec-c0cbb84ac763-kube-api-access-hmsdj\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.357966 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f238ff1-8922-4817-beec-c0cbb84ac763-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc 
kubenswrapper[4687]: I0131 07:12:22.357980 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f238ff1-8922-4817-beec-c0cbb84ac763-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.358021 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.358038 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.371773 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.372790 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.458946 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.458985 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.909711 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" 
event={"ID":"8b97933a-3f30-4de3-bae4-4c366768a611","Type":"ContainerDied","Data":"b8f28126dcf18d9a9e636e0e9ff36f301fb3f1802e12e876cc6383c174f96bb7"} Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.909829 4687 scope.go:117] "RemoveContainer" containerID="b1288578f0ae4110432533d80fb104ffbd0c9632d53d7de00c39752ec4c188b3" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.910206 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.917167 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"4f238ff1-8922-4817-beec-c0cbb84ac763","Type":"ContainerDied","Data":"1917e0f451138f3439bf610ec821db0f118fbe1cb80e7b9c16f7e84c79632323"} Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.917331 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.933627 4687 scope.go:117] "RemoveContainer" containerID="71c5cc4ec96bb324ffcadf9c5051fd2cc90e205b116e803c7a55756a3556105e" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.953607 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.968322 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.990531 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.991187 4687 scope.go:117] "RemoveContainer" containerID="d36f0f73478e857c1f416097a32aa1b15a35a69e0655513cd645c7a2e7d2c402" Jan 31 07:12:22 crc kubenswrapper[4687]: I0131 07:12:22.996205 4687 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:12:23 crc kubenswrapper[4687]: I0131 07:12:23.031177 4687 scope.go:117] "RemoveContainer" containerID="4e3e04e975e0515d9f09bdf8a2d51d118b62270845b72c7e2ac0ea644cc75c0e" Jan 31 07:12:23 crc kubenswrapper[4687]: I0131 07:12:23.603948 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:12:23 crc kubenswrapper[4687]: E0131 07:12:23.604484 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:12:23 crc kubenswrapper[4687]: I0131 07:12:23.614263 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="111e167d-4141-4668-acd4-c83e49104f69" path="/var/lib/kubelet/pods/111e167d-4141-4668-acd4-c83e49104f69/volumes" Jan 31 07:12:23 crc kubenswrapper[4687]: I0131 07:12:23.615161 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f238ff1-8922-4817-beec-c0cbb84ac763" path="/var/lib/kubelet/pods/4f238ff1-8922-4817-beec-c0cbb84ac763/volumes" Jan 31 07:12:23 crc kubenswrapper[4687]: I0131 07:12:23.616015 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b97933a-3f30-4de3-bae4-4c366768a611" path="/var/lib/kubelet/pods/8b97933a-3f30-4de3-bae4-4c366768a611/volumes" Jan 31 07:12:23 crc kubenswrapper[4687]: I0131 07:12:23.812366 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:12:23 crc kubenswrapper[4687]: I0131 07:12:23.812991 4687 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="dd530881-31d1-4d14-a877-2826adf94b2c" containerName="glance-log" containerID="cri-o://36bbcff6823aeac4b38d47464c175ec79581fd069e97ee7a3a36f1831f01ebdd" gracePeriod=30 Jan 31 07:12:23 crc kubenswrapper[4687]: I0131 07:12:23.813050 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="dd530881-31d1-4d14-a877-2826adf94b2c" containerName="glance-httpd" containerID="cri-o://4734c47c1c9ec9f58f0b9b5f83c084ac5a7ffc4b567f64e61b50adb66c45e683" gracePeriod=30 Jan 31 07:12:24 crc kubenswrapper[4687]: I0131 07:12:24.161483 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:12:24 crc kubenswrapper[4687]: I0131 07:12:24.161715 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="c13f92f0-6f82-491f-8e93-f2805292edf9" containerName="glance-log" containerID="cri-o://363c2e7238686b625de2a6c914c64758028211989df4ff170d72f3eccf89e90e" gracePeriod=30 Jan 31 07:12:24 crc kubenswrapper[4687]: I0131 07:12:24.161839 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="c13f92f0-6f82-491f-8e93-f2805292edf9" containerName="glance-httpd" containerID="cri-o://d3ff90f4d8350a12b77e89e6ce885e9bee193d32d846dbd3ac2299f7a34ef444" gracePeriod=30 Jan 31 07:12:24 crc kubenswrapper[4687]: I0131 07:12:24.941066 4687 generic.go:334] "Generic (PLEG): container finished" podID="c13f92f0-6f82-491f-8e93-f2805292edf9" containerID="363c2e7238686b625de2a6c914c64758028211989df4ff170d72f3eccf89e90e" exitCode=143 Jan 31 07:12:24 crc kubenswrapper[4687]: I0131 07:12:24.941180 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" 
event={"ID":"c13f92f0-6f82-491f-8e93-f2805292edf9","Type":"ContainerDied","Data":"363c2e7238686b625de2a6c914c64758028211989df4ff170d72f3eccf89e90e"} Jan 31 07:12:24 crc kubenswrapper[4687]: I0131 07:12:24.944771 4687 generic.go:334] "Generic (PLEG): container finished" podID="dd530881-31d1-4d14-a877-2826adf94b2c" containerID="36bbcff6823aeac4b38d47464c175ec79581fd069e97ee7a3a36f1831f01ebdd" exitCode=143 Jan 31 07:12:24 crc kubenswrapper[4687]: I0131 07:12:24.944809 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"dd530881-31d1-4d14-a877-2826adf94b2c","Type":"ContainerDied","Data":"36bbcff6823aeac4b38d47464c175ec79581fd069e97ee7a3a36f1831f01ebdd"} Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.844362 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.942736 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-run\") pod \"dd530881-31d1-4d14-a877-2826adf94b2c\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.942789 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"dd530881-31d1-4d14-a877-2826adf94b2c\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.942809 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"dd530881-31d1-4d14-a877-2826adf94b2c\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.942835 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd530881-31d1-4d14-a877-2826adf94b2c-scripts\") pod \"dd530881-31d1-4d14-a877-2826adf94b2c\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.942859 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd530881-31d1-4d14-a877-2826adf94b2c-httpd-run\") pod \"dd530881-31d1-4d14-a877-2826adf94b2c\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.942844 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-run" (OuterVolumeSpecName: "run") pod "dd530881-31d1-4d14-a877-2826adf94b2c" (UID: "dd530881-31d1-4d14-a877-2826adf94b2c"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.942889 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-etc-iscsi\") pod \"dd530881-31d1-4d14-a877-2826adf94b2c\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.942908 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-lib-modules\") pod \"dd530881-31d1-4d14-a877-2826adf94b2c\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.942924 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-etc-nvme\") pod \"dd530881-31d1-4d14-a877-2826adf94b2c\" (UID: 
\"dd530881-31d1-4d14-a877-2826adf94b2c\") " Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.942942 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-sys\") pod \"dd530881-31d1-4d14-a877-2826adf94b2c\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.942964 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbh96\" (UniqueName: \"kubernetes.io/projected/dd530881-31d1-4d14-a877-2826adf94b2c-kube-api-access-cbh96\") pod \"dd530881-31d1-4d14-a877-2826adf94b2c\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.943016 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-dev\") pod \"dd530881-31d1-4d14-a877-2826adf94b2c\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.943039 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-var-locks-brick\") pod \"dd530881-31d1-4d14-a877-2826adf94b2c\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.943066 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd530881-31d1-4d14-a877-2826adf94b2c-config-data\") pod \"dd530881-31d1-4d14-a877-2826adf94b2c\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.943107 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/dd530881-31d1-4d14-a877-2826adf94b2c-logs\") pod \"dd530881-31d1-4d14-a877-2826adf94b2c\" (UID: \"dd530881-31d1-4d14-a877-2826adf94b2c\") " Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.943378 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.943724 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-sys" (OuterVolumeSpecName: "sys") pod "dd530881-31d1-4d14-a877-2826adf94b2c" (UID: "dd530881-31d1-4d14-a877-2826adf94b2c"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.943767 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd530881-31d1-4d14-a877-2826adf94b2c-logs" (OuterVolumeSpecName: "logs") pod "dd530881-31d1-4d14-a877-2826adf94b2c" (UID: "dd530881-31d1-4d14-a877-2826adf94b2c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.943774 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dd530881-31d1-4d14-a877-2826adf94b2c" (UID: "dd530881-31d1-4d14-a877-2826adf94b2c"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.943802 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "dd530881-31d1-4d14-a877-2826adf94b2c" (UID: "dd530881-31d1-4d14-a877-2826adf94b2c"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.943839 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "dd530881-31d1-4d14-a877-2826adf94b2c" (UID: "dd530881-31d1-4d14-a877-2826adf94b2c"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.943845 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-dev" (OuterVolumeSpecName: "dev") pod "dd530881-31d1-4d14-a877-2826adf94b2c" (UID: "dd530881-31d1-4d14-a877-2826adf94b2c"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.943856 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "dd530881-31d1-4d14-a877-2826adf94b2c" (UID: "dd530881-31d1-4d14-a877-2826adf94b2c"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.944077 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd530881-31d1-4d14-a877-2826adf94b2c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dd530881-31d1-4d14-a877-2826adf94b2c" (UID: "dd530881-31d1-4d14-a877-2826adf94b2c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.948978 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "dd530881-31d1-4d14-a877-2826adf94b2c" (UID: "dd530881-31d1-4d14-a877-2826adf94b2c"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.959642 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd530881-31d1-4d14-a877-2826adf94b2c-scripts" (OuterVolumeSpecName: "scripts") pod "dd530881-31d1-4d14-a877-2826adf94b2c" (UID: "dd530881-31d1-4d14-a877-2826adf94b2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.959659 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage20-crc" (OuterVolumeSpecName: "glance-cache") pod "dd530881-31d1-4d14-a877-2826adf94b2c" (UID: "dd530881-31d1-4d14-a877-2826adf94b2c"). InnerVolumeSpecName "local-storage20-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.959713 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd530881-31d1-4d14-a877-2826adf94b2c-kube-api-access-cbh96" (OuterVolumeSpecName: "kube-api-access-cbh96") pod "dd530881-31d1-4d14-a877-2826adf94b2c" (UID: "dd530881-31d1-4d14-a877-2826adf94b2c"). InnerVolumeSpecName "kube-api-access-cbh96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.976786 4687 generic.go:334] "Generic (PLEG): container finished" podID="c13f92f0-6f82-491f-8e93-f2805292edf9" containerID="d3ff90f4d8350a12b77e89e6ce885e9bee193d32d846dbd3ac2299f7a34ef444" exitCode=0 Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.976834 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"c13f92f0-6f82-491f-8e93-f2805292edf9","Type":"ContainerDied","Data":"d3ff90f4d8350a12b77e89e6ce885e9bee193d32d846dbd3ac2299f7a34ef444"} Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.978972 4687 generic.go:334] "Generic (PLEG): container finished" podID="dd530881-31d1-4d14-a877-2826adf94b2c" containerID="4734c47c1c9ec9f58f0b9b5f83c084ac5a7ffc4b567f64e61b50adb66c45e683" exitCode=0 Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.979016 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"dd530881-31d1-4d14-a877-2826adf94b2c","Type":"ContainerDied","Data":"4734c47c1c9ec9f58f0b9b5f83c084ac5a7ffc4b567f64e61b50adb66c45e683"} Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.979040 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"dd530881-31d1-4d14-a877-2826adf94b2c","Type":"ContainerDied","Data":"c18370cf3e22253905b8b08ac5985c2dd80647305231a9855f29ffb3299bb81b"} 
Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.979060 4687 scope.go:117] "RemoveContainer" containerID="4734c47c1c9ec9f58f0b9b5f83c084ac5a7ffc4b567f64e61b50adb66c45e683" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.979166 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:12:27 crc kubenswrapper[4687]: I0131 07:12:27.986199 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd530881-31d1-4d14-a877-2826adf94b2c-config-data" (OuterVolumeSpecName: "config-data") pod "dd530881-31d1-4d14-a877-2826adf94b2c" (UID: "dd530881-31d1-4d14-a877-2826adf94b2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.044522 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.044564 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.044576 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd530881-31d1-4d14-a877-2826adf94b2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.044588 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd530881-31d1-4d14-a877-2826adf94b2c-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.044626 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage20-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") on node \"crc\" " Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.044644 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.044657 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd530881-31d1-4d14-a877-2826adf94b2c-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.044669 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd530881-31d1-4d14-a877-2826adf94b2c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.044679 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.044688 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.044699 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.044709 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dd530881-31d1-4d14-a877-2826adf94b2c-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.044721 4687 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-cbh96\" (UniqueName: \"kubernetes.io/projected/dd530881-31d1-4d14-a877-2826adf94b2c-kube-api-access-cbh96\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.049880 4687 scope.go:117] "RemoveContainer" containerID="36bbcff6823aeac4b38d47464c175ec79581fd069e97ee7a3a36f1831f01ebdd" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.059480 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage20-crc" (UniqueName: "kubernetes.io/local-volume/local-storage20-crc") on node "crc" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.059825 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.072251 4687 scope.go:117] "RemoveContainer" containerID="4734c47c1c9ec9f58f0b9b5f83c084ac5a7ffc4b567f64e61b50adb66c45e683" Jan 31 07:12:28 crc kubenswrapper[4687]: E0131 07:12:28.072747 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4734c47c1c9ec9f58f0b9b5f83c084ac5a7ffc4b567f64e61b50adb66c45e683\": container with ID starting with 4734c47c1c9ec9f58f0b9b5f83c084ac5a7ffc4b567f64e61b50adb66c45e683 not found: ID does not exist" containerID="4734c47c1c9ec9f58f0b9b5f83c084ac5a7ffc4b567f64e61b50adb66c45e683" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.072787 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4734c47c1c9ec9f58f0b9b5f83c084ac5a7ffc4b567f64e61b50adb66c45e683"} err="failed to get container status \"4734c47c1c9ec9f58f0b9b5f83c084ac5a7ffc4b567f64e61b50adb66c45e683\": rpc error: code = NotFound desc = could not find container \"4734c47c1c9ec9f58f0b9b5f83c084ac5a7ffc4b567f64e61b50adb66c45e683\": container with ID starting with 
4734c47c1c9ec9f58f0b9b5f83c084ac5a7ffc4b567f64e61b50adb66c45e683 not found: ID does not exist" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.072813 4687 scope.go:117] "RemoveContainer" containerID="36bbcff6823aeac4b38d47464c175ec79581fd069e97ee7a3a36f1831f01ebdd" Jan 31 07:12:28 crc kubenswrapper[4687]: E0131 07:12:28.073021 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36bbcff6823aeac4b38d47464c175ec79581fd069e97ee7a3a36f1831f01ebdd\": container with ID starting with 36bbcff6823aeac4b38d47464c175ec79581fd069e97ee7a3a36f1831f01ebdd not found: ID does not exist" containerID="36bbcff6823aeac4b38d47464c175ec79581fd069e97ee7a3a36f1831f01ebdd" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.073047 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bbcff6823aeac4b38d47464c175ec79581fd069e97ee7a3a36f1831f01ebdd"} err="failed to get container status \"36bbcff6823aeac4b38d47464c175ec79581fd069e97ee7a3a36f1831f01ebdd\": rpc error: code = NotFound desc = could not find container \"36bbcff6823aeac4b38d47464c175ec79581fd069e97ee7a3a36f1831f01ebdd\": container with ID starting with 36bbcff6823aeac4b38d47464c175ec79581fd069e97ee7a3a36f1831f01ebdd not found: ID does not exist" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.146239 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.146268 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.180201 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.247114 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c13f92f0-6f82-491f-8e93-f2805292edf9-config-data\") pod \"c13f92f0-6f82-491f-8e93-f2805292edf9\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.247181 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"c13f92f0-6f82-491f-8e93-f2805292edf9\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.247215 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-etc-nvme\") pod \"c13f92f0-6f82-491f-8e93-f2805292edf9\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.247259 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c13f92f0-6f82-491f-8e93-f2805292edf9-scripts\") pod \"c13f92f0-6f82-491f-8e93-f2805292edf9\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.247282 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-lib-modules\") pod \"c13f92f0-6f82-491f-8e93-f2805292edf9\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") " Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.247305 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-dev\") pod \"c13f92f0-6f82-491f-8e93-f2805292edf9\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") "
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.247332 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-var-locks-brick\") pod \"c13f92f0-6f82-491f-8e93-f2805292edf9\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") "
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.247378 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-etc-iscsi\") pod \"c13f92f0-6f82-491f-8e93-f2805292edf9\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") "
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.247440 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvfs9\" (UniqueName: \"kubernetes.io/projected/c13f92f0-6f82-491f-8e93-f2805292edf9-kube-api-access-pvfs9\") pod \"c13f92f0-6f82-491f-8e93-f2805292edf9\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") "
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.247796 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c13f92f0-6f82-491f-8e93-f2805292edf9-httpd-run\") pod \"c13f92f0-6f82-491f-8e93-f2805292edf9\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") "
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.247851 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c13f92f0-6f82-491f-8e93-f2805292edf9-logs\") pod \"c13f92f0-6f82-491f-8e93-f2805292edf9\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") "
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.247893 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"c13f92f0-6f82-491f-8e93-f2805292edf9\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") "
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.247947 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-run\") pod \"c13f92f0-6f82-491f-8e93-f2805292edf9\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") "
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.247970 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-sys\") pod \"c13f92f0-6f82-491f-8e93-f2805292edf9\" (UID: \"c13f92f0-6f82-491f-8e93-f2805292edf9\") "
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.248448 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-sys" (OuterVolumeSpecName: "sys") pod "c13f92f0-6f82-491f-8e93-f2805292edf9" (UID: "c13f92f0-6f82-491f-8e93-f2805292edf9"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.248510 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "c13f92f0-6f82-491f-8e93-f2805292edf9" (UID: "c13f92f0-6f82-491f-8e93-f2805292edf9"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.249553 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-dev" (OuterVolumeSpecName: "dev") pod "c13f92f0-6f82-491f-8e93-f2805292edf9" (UID: "c13f92f0-6f82-491f-8e93-f2805292edf9"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.249596 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c13f92f0-6f82-491f-8e93-f2805292edf9" (UID: "c13f92f0-6f82-491f-8e93-f2805292edf9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.249609 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "c13f92f0-6f82-491f-8e93-f2805292edf9" (UID: "c13f92f0-6f82-491f-8e93-f2805292edf9"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.249627 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "c13f92f0-6f82-491f-8e93-f2805292edf9" (UID: "c13f92f0-6f82-491f-8e93-f2805292edf9"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.249885 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-run" (OuterVolumeSpecName: "run") pod "c13f92f0-6f82-491f-8e93-f2805292edf9" (UID: "c13f92f0-6f82-491f-8e93-f2805292edf9"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.250037 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c13f92f0-6f82-491f-8e93-f2805292edf9-logs" (OuterVolumeSpecName: "logs") pod "c13f92f0-6f82-491f-8e93-f2805292edf9" (UID: "c13f92f0-6f82-491f-8e93-f2805292edf9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.250094 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c13f92f0-6f82-491f-8e93-f2805292edf9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c13f92f0-6f82-491f-8e93-f2805292edf9" (UID: "c13f92f0-6f82-491f-8e93-f2805292edf9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.252632 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage16-crc" (OuterVolumeSpecName: "glance-cache") pod "c13f92f0-6f82-491f-8e93-f2805292edf9" (UID: "c13f92f0-6f82-491f-8e93-f2805292edf9"). InnerVolumeSpecName "local-storage16-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.253960 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c13f92f0-6f82-491f-8e93-f2805292edf9-scripts" (OuterVolumeSpecName: "scripts") pod "c13f92f0-6f82-491f-8e93-f2805292edf9" (UID: "c13f92f0-6f82-491f-8e93-f2805292edf9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.254508 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage18-crc" (OuterVolumeSpecName: "glance") pod "c13f92f0-6f82-491f-8e93-f2805292edf9" (UID: "c13f92f0-6f82-491f-8e93-f2805292edf9"). InnerVolumeSpecName "local-storage18-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.254651 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c13f92f0-6f82-491f-8e93-f2805292edf9-kube-api-access-pvfs9" (OuterVolumeSpecName: "kube-api-access-pvfs9") pod "c13f92f0-6f82-491f-8e93-f2805292edf9" (UID: "c13f92f0-6f82-491f-8e93-f2805292edf9"). InnerVolumeSpecName "kube-api-access-pvfs9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.280503 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c13f92f0-6f82-491f-8e93-f2805292edf9-config-data" (OuterVolumeSpecName: "config-data") pod "c13f92f0-6f82-491f-8e93-f2805292edf9" (UID: "c13f92f0-6f82-491f-8e93-f2805292edf9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.325829 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"]
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.334609 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"]
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.350000 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" "
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.350035 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-etc-nvme\") on node \"crc\" DevicePath \"\""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.350047 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c13f92f0-6f82-491f-8e93-f2805292edf9-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.350056 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-lib-modules\") on node \"crc\" DevicePath \"\""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.350064 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-dev\") on node \"crc\" DevicePath \"\""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.350072 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-var-locks-brick\") on node \"crc\" DevicePath \"\""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.350082 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-etc-iscsi\") on node \"crc\" DevicePath \"\""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.350091 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvfs9\" (UniqueName: \"kubernetes.io/projected/c13f92f0-6f82-491f-8e93-f2805292edf9-kube-api-access-pvfs9\") on node \"crc\" DevicePath \"\""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.350100 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c13f92f0-6f82-491f-8e93-f2805292edf9-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.350110 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c13f92f0-6f82-491f-8e93-f2805292edf9-logs\") on node \"crc\" DevicePath \"\""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.350124 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" "
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.350134 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-run\") on node \"crc\" DevicePath \"\""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.350142 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c13f92f0-6f82-491f-8e93-f2805292edf9-sys\") on node \"crc\" DevicePath \"\""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.350149 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c13f92f0-6f82-491f-8e93-f2805292edf9-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.362968 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage16-crc" (UniqueName: "kubernetes.io/local-volume/local-storage16-crc") on node "crc"
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.363887 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage18-crc" (UniqueName: "kubernetes.io/local-volume/local-storage18-crc") on node "crc"
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.451967 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" DevicePath \"\""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.452018 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" DevicePath \"\""
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.989328 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"c13f92f0-6f82-491f-8e93-f2805292edf9","Type":"ContainerDied","Data":"152dd6728465edc4ab3d791db475c7d12beb43d75b0f9c3b4bb1743bf743e164"}
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.989375 4687 scope.go:117] "RemoveContainer" containerID="d3ff90f4d8350a12b77e89e6ce885e9bee193d32d846dbd3ac2299f7a34ef444"
Jan 31 07:12:28 crc kubenswrapper[4687]: I0131 07:12:28.989483 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.014587 4687 scope.go:117] "RemoveContainer" containerID="363c2e7238686b625de2a6c914c64758028211989df4ff170d72f3eccf89e90e"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.024333 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.031134 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.510686 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-7xmk6"]
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.516371 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-7xmk6"]
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598056 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glancec94f-account-delete-88gc5"]
Jan 31 07:12:29 crc kubenswrapper[4687]: E0131 07:12:29.598472 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c13f92f0-6f82-491f-8e93-f2805292edf9" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598489 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13f92f0-6f82-491f-8e93-f2805292edf9" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: E0131 07:12:29.598501 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f238ff1-8922-4817-beec-c0cbb84ac763" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598508 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f238ff1-8922-4817-beec-c0cbb84ac763" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: E0131 07:12:29.598523 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b97933a-3f30-4de3-bae4-4c366768a611" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598529 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b97933a-3f30-4de3-bae4-4c366768a611" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: E0131 07:12:29.598538 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f238ff1-8922-4817-beec-c0cbb84ac763" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598545 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f238ff1-8922-4817-beec-c0cbb84ac763" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: E0131 07:12:29.598558 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c13f92f0-6f82-491f-8e93-f2805292edf9" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598564 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13f92f0-6f82-491f-8e93-f2805292edf9" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: E0131 07:12:29.598576 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111e167d-4141-4668-acd4-c83e49104f69" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598582 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="111e167d-4141-4668-acd4-c83e49104f69" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: E0131 07:12:29.598592 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e34eda2-4099-4d6c-aba3-eb297216a9d5" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598599 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e34eda2-4099-4d6c-aba3-eb297216a9d5" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: E0131 07:12:29.598609 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111e167d-4141-4668-acd4-c83e49104f69" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598615 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="111e167d-4141-4668-acd4-c83e49104f69" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: E0131 07:12:29.598626 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b97933a-3f30-4de3-bae4-4c366768a611" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598631 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b97933a-3f30-4de3-bae4-4c366768a611" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: E0131 07:12:29.598641 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd530881-31d1-4d14-a877-2826adf94b2c" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598646 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd530881-31d1-4d14-a877-2826adf94b2c" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: E0131 07:12:29.598654 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e34eda2-4099-4d6c-aba3-eb297216a9d5" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598659 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e34eda2-4099-4d6c-aba3-eb297216a9d5" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: E0131 07:12:29.598669 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd530881-31d1-4d14-a877-2826adf94b2c" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598675 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd530881-31d1-4d14-a877-2826adf94b2c" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598784 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd530881-31d1-4d14-a877-2826adf94b2c" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598795 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f238ff1-8922-4817-beec-c0cbb84ac763" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598807 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="111e167d-4141-4668-acd4-c83e49104f69" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598819 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd530881-31d1-4d14-a877-2826adf94b2c" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598829 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c13f92f0-6f82-491f-8e93-f2805292edf9" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598835 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c13f92f0-6f82-491f-8e93-f2805292edf9" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598842 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e34eda2-4099-4d6c-aba3-eb297216a9d5" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598850 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e34eda2-4099-4d6c-aba3-eb297216a9d5" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598858 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="111e167d-4141-4668-acd4-c83e49104f69" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598866 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b97933a-3f30-4de3-bae4-4c366768a611" containerName="glance-httpd"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598873 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f238ff1-8922-4817-beec-c0cbb84ac763" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.598882 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b97933a-3f30-4de3-bae4-4c366768a611" containerName="glance-log"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.599345 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancec94f-account-delete-88gc5"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.651289 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19c6086f-95b9-43e6-94bc-b8bb8a35fa6d" path="/var/lib/kubelet/pods/19c6086f-95b9-43e6-94bc-b8bb8a35fa6d/volumes"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.652350 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c13f92f0-6f82-491f-8e93-f2805292edf9" path="/var/lib/kubelet/pods/c13f92f0-6f82-491f-8e93-f2805292edf9/volumes"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.653483 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd530881-31d1-4d14-a877-2826adf94b2c" path="/var/lib/kubelet/pods/dd530881-31d1-4d14-a877-2826adf94b2c/volumes"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.659730 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancec94f-account-delete-88gc5"]
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.671383 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c77e04c-a50a-4f81-83f2-ca94332e791e-operator-scripts\") pod \"glancec94f-account-delete-88gc5\" (UID: \"3c77e04c-a50a-4f81-83f2-ca94332e791e\") " pod="glance-kuttl-tests/glancec94f-account-delete-88gc5"
Jan 31 07:12:29 crc kubenswrapper[4687]: I0131 07:12:29.675029 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmrds\" (UniqueName: \"kubernetes.io/projected/3c77e04c-a50a-4f81-83f2-ca94332e791e-kube-api-access-jmrds\") pod \"glancec94f-account-delete-88gc5\" (UID: \"3c77e04c-a50a-4f81-83f2-ca94332e791e\") " pod="glance-kuttl-tests/glancec94f-account-delete-88gc5"
Jan 31 07:12:30 crc kubenswrapper[4687]: I0131 07:12:29.777566 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmrds\" (UniqueName: \"kubernetes.io/projected/3c77e04c-a50a-4f81-83f2-ca94332e791e-kube-api-access-jmrds\") pod \"glancec94f-account-delete-88gc5\" (UID: \"3c77e04c-a50a-4f81-83f2-ca94332e791e\") " pod="glance-kuttl-tests/glancec94f-account-delete-88gc5"
Jan 31 07:12:30 crc kubenswrapper[4687]: I0131 07:12:29.777634 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c77e04c-a50a-4f81-83f2-ca94332e791e-operator-scripts\") pod \"glancec94f-account-delete-88gc5\" (UID: \"3c77e04c-a50a-4f81-83f2-ca94332e791e\") " pod="glance-kuttl-tests/glancec94f-account-delete-88gc5"
Jan 31 07:12:30 crc kubenswrapper[4687]: I0131 07:12:29.778508 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c77e04c-a50a-4f81-83f2-ca94332e791e-operator-scripts\") pod \"glancec94f-account-delete-88gc5\" (UID: \"3c77e04c-a50a-4f81-83f2-ca94332e791e\") " pod="glance-kuttl-tests/glancec94f-account-delete-88gc5"
Jan 31 07:12:30 crc kubenswrapper[4687]: I0131 07:12:29.796674 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmrds\" (UniqueName: \"kubernetes.io/projected/3c77e04c-a50a-4f81-83f2-ca94332e791e-kube-api-access-jmrds\") pod \"glancec94f-account-delete-88gc5\" (UID: \"3c77e04c-a50a-4f81-83f2-ca94332e791e\") " pod="glance-kuttl-tests/glancec94f-account-delete-88gc5"
Jan 31 07:12:30 crc kubenswrapper[4687]: I0131 07:12:29.949863 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancec94f-account-delete-88gc5"
Jan 31 07:12:30 crc kubenswrapper[4687]: I0131 07:12:30.680072 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancec94f-account-delete-88gc5"]
Jan 31 07:12:31 crc kubenswrapper[4687]: I0131 07:12:31.013426 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancec94f-account-delete-88gc5" event={"ID":"3c77e04c-a50a-4f81-83f2-ca94332e791e","Type":"ContainerStarted","Data":"0afef5cdc06c693bb8942968a397546f2ad5966a3405887ec65baf694f4987e6"}
Jan 31 07:12:31 crc kubenswrapper[4687]: I0131 07:12:31.013493 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancec94f-account-delete-88gc5" event={"ID":"3c77e04c-a50a-4f81-83f2-ca94332e791e","Type":"ContainerStarted","Data":"29940e4365dd15a0912a84e4aca2af2f4d7a54d73239a3af88d87870a11d9b03"}
Jan 31 07:12:32 crc kubenswrapper[4687]: I0131 07:12:32.041178 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glancec94f-account-delete-88gc5" podStartSLOduration=3.041151829 podStartE2EDuration="3.041151829s" podCreationTimestamp="2026-01-31 07:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:12:32.033075908 +0000 UTC m=+1778.310335493" watchObservedRunningTime="2026-01-31 07:12:32.041151829 +0000 UTC m=+1778.318411404"
Jan 31 07:12:33 crc kubenswrapper[4687]: I0131 07:12:33.027798 4687 generic.go:334] "Generic (PLEG): container finished" podID="3c77e04c-a50a-4f81-83f2-ca94332e791e" containerID="0afef5cdc06c693bb8942968a397546f2ad5966a3405887ec65baf694f4987e6" exitCode=0
Jan 31 07:12:33 crc kubenswrapper[4687]: I0131 07:12:33.027838 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancec94f-account-delete-88gc5" event={"ID":"3c77e04c-a50a-4f81-83f2-ca94332e791e","Type":"ContainerDied","Data":"0afef5cdc06c693bb8942968a397546f2ad5966a3405887ec65baf694f4987e6"}
Jan 31 07:12:34 crc kubenswrapper[4687]: I0131 07:12:34.316783 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancec94f-account-delete-88gc5"
Jan 31 07:12:34 crc kubenswrapper[4687]: I0131 07:12:34.456919 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmrds\" (UniqueName: \"kubernetes.io/projected/3c77e04c-a50a-4f81-83f2-ca94332e791e-kube-api-access-jmrds\") pod \"3c77e04c-a50a-4f81-83f2-ca94332e791e\" (UID: \"3c77e04c-a50a-4f81-83f2-ca94332e791e\") "
Jan 31 07:12:34 crc kubenswrapper[4687]: I0131 07:12:34.456994 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c77e04c-a50a-4f81-83f2-ca94332e791e-operator-scripts\") pod \"3c77e04c-a50a-4f81-83f2-ca94332e791e\" (UID: \"3c77e04c-a50a-4f81-83f2-ca94332e791e\") "
Jan 31 07:12:34 crc kubenswrapper[4687]: I0131 07:12:34.457830 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c77e04c-a50a-4f81-83f2-ca94332e791e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c77e04c-a50a-4f81-83f2-ca94332e791e" (UID: "3c77e04c-a50a-4f81-83f2-ca94332e791e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:12:34 crc kubenswrapper[4687]: I0131 07:12:34.522707 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c77e04c-a50a-4f81-83f2-ca94332e791e-kube-api-access-jmrds" (OuterVolumeSpecName: "kube-api-access-jmrds") pod "3c77e04c-a50a-4f81-83f2-ca94332e791e" (UID: "3c77e04c-a50a-4f81-83f2-ca94332e791e"). InnerVolumeSpecName "kube-api-access-jmrds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:12:34 crc kubenswrapper[4687]: I0131 07:12:34.559618 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmrds\" (UniqueName: \"kubernetes.io/projected/3c77e04c-a50a-4f81-83f2-ca94332e791e-kube-api-access-jmrds\") on node \"crc\" DevicePath \"\""
Jan 31 07:12:34 crc kubenswrapper[4687]: I0131 07:12:34.560005 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c77e04c-a50a-4f81-83f2-ca94332e791e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:12:35 crc kubenswrapper[4687]: I0131 07:12:35.043073 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancec94f-account-delete-88gc5" event={"ID":"3c77e04c-a50a-4f81-83f2-ca94332e791e","Type":"ContainerDied","Data":"29940e4365dd15a0912a84e4aca2af2f4d7a54d73239a3af88d87870a11d9b03"}
Jan 31 07:12:35 crc kubenswrapper[4687]: I0131 07:12:35.043115 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29940e4365dd15a0912a84e4aca2af2f4d7a54d73239a3af88d87870a11d9b03"
Jan 31 07:12:35 crc kubenswrapper[4687]: I0131 07:12:35.043116 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancec94f-account-delete-88gc5"
Jan 31 07:12:38 crc kubenswrapper[4687]: I0131 07:12:38.603233 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c"
Jan 31 07:12:38 crc kubenswrapper[4687]: E0131 07:12:38.603869 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192"
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.633879 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-pjkn6"]
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.642179 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-pjkn6"]
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.654014 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glancec94f-account-delete-88gc5"]
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.662178 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glancec94f-account-delete-88gc5"]
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.669395 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-c94f-account-create-update-f8wfc"]
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.674431 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-c94f-account-create-update-f8wfc"]
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.710518 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-6b297"]
Jan 31 07:12:39 crc kubenswrapper[4687]: E0131 07:12:39.710794 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c77e04c-a50a-4f81-83f2-ca94332e791e" containerName="mariadb-account-delete"
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.710811 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c77e04c-a50a-4f81-83f2-ca94332e791e" containerName="mariadb-account-delete"
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.710970 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c77e04c-a50a-4f81-83f2-ca94332e791e" containerName="mariadb-account-delete"
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.711614 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-6b297"
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.719346 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-6b297"]
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.821860 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-665d-account-create-update-bp9g4"]
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.823056 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-665d-account-create-update-bp9g4"
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.832006 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret"
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.840245 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-665d-account-create-update-bp9g4"]
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.841864 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgzxr\" (UniqueName: \"kubernetes.io/projected/8189cedc-a578-41ea-89ea-75af7e188168-kube-api-access-rgzxr\") pod \"glance-db-create-6b297\" (UID: \"8189cedc-a578-41ea-89ea-75af7e188168\") " pod="glance-kuttl-tests/glance-db-create-6b297"
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.841954 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8189cedc-a578-41ea-89ea-75af7e188168-operator-scripts\") pod \"glance-db-create-6b297\" (UID: \"8189cedc-a578-41ea-89ea-75af7e188168\") " pod="glance-kuttl-tests/glance-db-create-6b297"
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.943040 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8d26\" (UniqueName: \"kubernetes.io/projected/b2cbc54d-d182-4b8f-8e1d-a63b109bb41f-kube-api-access-h8d26\") pod \"glance-665d-account-create-update-bp9g4\" (UID: \"b2cbc54d-d182-4b8f-8e1d-a63b109bb41f\") " pod="glance-kuttl-tests/glance-665d-account-create-update-bp9g4"
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.943085 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgzxr\" (UniqueName: \"kubernetes.io/projected/8189cedc-a578-41ea-89ea-75af7e188168-kube-api-access-rgzxr\") pod \"glance-db-create-6b297\" (UID: \"8189cedc-a578-41ea-89ea-75af7e188168\") " pod="glance-kuttl-tests/glance-db-create-6b297"
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.943140 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8189cedc-a578-41ea-89ea-75af7e188168-operator-scripts\") pod \"glance-db-create-6b297\" (UID: \"8189cedc-a578-41ea-89ea-75af7e188168\") " pod="glance-kuttl-tests/glance-db-create-6b297"
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.943158 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2cbc54d-d182-4b8f-8e1d-a63b109bb41f-operator-scripts\") pod \"glance-665d-account-create-update-bp9g4\" (UID: \"b2cbc54d-d182-4b8f-8e1d-a63b109bb41f\") " pod="glance-kuttl-tests/glance-665d-account-create-update-bp9g4"
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.943808 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8189cedc-a578-41ea-89ea-75af7e188168-operator-scripts\") pod \"glance-db-create-6b297\" (UID: \"8189cedc-a578-41ea-89ea-75af7e188168\") " pod="glance-kuttl-tests/glance-db-create-6b297"
Jan 31 07:12:39 crc kubenswrapper[4687]: I0131 07:12:39.961919 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgzxr\" (UniqueName: \"kubernetes.io/projected/8189cedc-a578-41ea-89ea-75af7e188168-kube-api-access-rgzxr\") pod \"glance-db-create-6b297\" (UID: \"8189cedc-a578-41ea-89ea-75af7e188168\") " pod="glance-kuttl-tests/glance-db-create-6b297"
Jan 31 07:12:40 crc kubenswrapper[4687]: I0131 07:12:40.038251 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-6b297"
Jan 31 07:12:40 crc kubenswrapper[4687]: I0131 07:12:40.044575 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8d26\" (UniqueName: \"kubernetes.io/projected/b2cbc54d-d182-4b8f-8e1d-a63b109bb41f-kube-api-access-h8d26\") pod \"glance-665d-account-create-update-bp9g4\" (UID: \"b2cbc54d-d182-4b8f-8e1d-a63b109bb41f\") " pod="glance-kuttl-tests/glance-665d-account-create-update-bp9g4"
Jan 31 07:12:40 crc kubenswrapper[4687]: I0131 07:12:40.044657 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2cbc54d-d182-4b8f-8e1d-a63b109bb41f-operator-scripts\") pod \"glance-665d-account-create-update-bp9g4\" (UID: \"b2cbc54d-d182-4b8f-8e1d-a63b109bb41f\") " pod="glance-kuttl-tests/glance-665d-account-create-update-bp9g4"
Jan 31 07:12:40 crc kubenswrapper[4687]: I0131 07:12:40.045525 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2cbc54d-d182-4b8f-8e1d-a63b109bb41f-operator-scripts\") pod \"glance-665d-account-create-update-bp9g4\" (UID: \"b2cbc54d-d182-4b8f-8e1d-a63b109bb41f\") " pod="glance-kuttl-tests/glance-665d-account-create-update-bp9g4"
Jan 31 07:12:40 crc kubenswrapper[4687]: I0131 07:12:40.063560 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8d26\" (UniqueName: \"kubernetes.io/projected/b2cbc54d-d182-4b8f-8e1d-a63b109bb41f-kube-api-access-h8d26\") pod \"glance-665d-account-create-update-bp9g4\" (UID: \"b2cbc54d-d182-4b8f-8e1d-a63b109bb41f\") " pod="glance-kuttl-tests/glance-665d-account-create-update-bp9g4"
Jan 31 07:12:40 crc kubenswrapper[4687]: I0131 07:12:40.140575 4687 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="glance-kuttl-tests/glance-665d-account-create-update-bp9g4" Jan 31 07:12:40 crc kubenswrapper[4687]: I0131 07:12:40.257912 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-6b297"] Jan 31 07:12:40 crc kubenswrapper[4687]: I0131 07:12:40.569478 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-665d-account-create-update-bp9g4"] Jan 31 07:12:41 crc kubenswrapper[4687]: I0131 07:12:41.089992 4687 generic.go:334] "Generic (PLEG): container finished" podID="8189cedc-a578-41ea-89ea-75af7e188168" containerID="ded8f27288ed169650289e3a12e1b2609f051b47914652f26c02f2b572b7ec86" exitCode=0 Jan 31 07:12:41 crc kubenswrapper[4687]: I0131 07:12:41.090078 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-6b297" event={"ID":"8189cedc-a578-41ea-89ea-75af7e188168","Type":"ContainerDied","Data":"ded8f27288ed169650289e3a12e1b2609f051b47914652f26c02f2b572b7ec86"} Jan 31 07:12:41 crc kubenswrapper[4687]: I0131 07:12:41.090113 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-6b297" event={"ID":"8189cedc-a578-41ea-89ea-75af7e188168","Type":"ContainerStarted","Data":"a2df952328ae27c66598fc7a7e35c611a119bfd6d2c7f11e7bf204e86b68ca3d"} Jan 31 07:12:41 crc kubenswrapper[4687]: I0131 07:12:41.091946 4687 generic.go:334] "Generic (PLEG): container finished" podID="b2cbc54d-d182-4b8f-8e1d-a63b109bb41f" containerID="07d6af67966269759eb0651e4762fdadb801c44d4243ce89de96056707307363" exitCode=0 Jan 31 07:12:41 crc kubenswrapper[4687]: I0131 07:12:41.092004 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-665d-account-create-update-bp9g4" event={"ID":"b2cbc54d-d182-4b8f-8e1d-a63b109bb41f","Type":"ContainerDied","Data":"07d6af67966269759eb0651e4762fdadb801c44d4243ce89de96056707307363"} Jan 31 07:12:41 crc kubenswrapper[4687]: I0131 07:12:41.092034 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-665d-account-create-update-bp9g4" event={"ID":"b2cbc54d-d182-4b8f-8e1d-a63b109bb41f","Type":"ContainerStarted","Data":"4e9663e840323a9886b713ae6f221e5c05daa14e8e7b25822103f23c275376f5"} Jan 31 07:12:41 crc kubenswrapper[4687]: I0131 07:12:41.612297 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c77e04c-a50a-4f81-83f2-ca94332e791e" path="/var/lib/kubelet/pods/3c77e04c-a50a-4f81-83f2-ca94332e791e/volumes" Jan 31 07:12:41 crc kubenswrapper[4687]: I0131 07:12:41.613063 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f6324d3-a530-4ce5-b00a-6f77fd585509" path="/var/lib/kubelet/pods/6f6324d3-a530-4ce5-b00a-6f77fd585509/volumes" Jan 31 07:12:41 crc kubenswrapper[4687]: I0131 07:12:41.613562 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad5f32b0-dd27-4c63-91a6-a63cb5bf5452" path="/var/lib/kubelet/pods/ad5f32b0-dd27-4c63-91a6-a63cb5bf5452/volumes" Jan 31 07:12:42 crc kubenswrapper[4687]: I0131 07:12:42.474073 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-6b297" Jan 31 07:12:42 crc kubenswrapper[4687]: I0131 07:12:42.480725 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-665d-account-create-update-bp9g4" Jan 31 07:12:42 crc kubenswrapper[4687]: I0131 07:12:42.589027 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2cbc54d-d182-4b8f-8e1d-a63b109bb41f-operator-scripts\") pod \"b2cbc54d-d182-4b8f-8e1d-a63b109bb41f\" (UID: \"b2cbc54d-d182-4b8f-8e1d-a63b109bb41f\") " Jan 31 07:12:42 crc kubenswrapper[4687]: I0131 07:12:42.589073 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8189cedc-a578-41ea-89ea-75af7e188168-operator-scripts\") pod \"8189cedc-a578-41ea-89ea-75af7e188168\" (UID: \"8189cedc-a578-41ea-89ea-75af7e188168\") " Jan 31 07:12:42 crc kubenswrapper[4687]: I0131 07:12:42.589144 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8d26\" (UniqueName: \"kubernetes.io/projected/b2cbc54d-d182-4b8f-8e1d-a63b109bb41f-kube-api-access-h8d26\") pod \"b2cbc54d-d182-4b8f-8e1d-a63b109bb41f\" (UID: \"b2cbc54d-d182-4b8f-8e1d-a63b109bb41f\") " Jan 31 07:12:42 crc kubenswrapper[4687]: I0131 07:12:42.589219 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgzxr\" (UniqueName: \"kubernetes.io/projected/8189cedc-a578-41ea-89ea-75af7e188168-kube-api-access-rgzxr\") pod \"8189cedc-a578-41ea-89ea-75af7e188168\" (UID: \"8189cedc-a578-41ea-89ea-75af7e188168\") " Jan 31 07:12:42 crc kubenswrapper[4687]: I0131 07:12:42.590106 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8189cedc-a578-41ea-89ea-75af7e188168-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8189cedc-a578-41ea-89ea-75af7e188168" (UID: "8189cedc-a578-41ea-89ea-75af7e188168"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:12:42 crc kubenswrapper[4687]: I0131 07:12:42.590174 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2cbc54d-d182-4b8f-8e1d-a63b109bb41f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2cbc54d-d182-4b8f-8e1d-a63b109bb41f" (UID: "b2cbc54d-d182-4b8f-8e1d-a63b109bb41f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:12:42 crc kubenswrapper[4687]: I0131 07:12:42.594630 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8189cedc-a578-41ea-89ea-75af7e188168-kube-api-access-rgzxr" (OuterVolumeSpecName: "kube-api-access-rgzxr") pod "8189cedc-a578-41ea-89ea-75af7e188168" (UID: "8189cedc-a578-41ea-89ea-75af7e188168"). InnerVolumeSpecName "kube-api-access-rgzxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:12:42 crc kubenswrapper[4687]: I0131 07:12:42.594690 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2cbc54d-d182-4b8f-8e1d-a63b109bb41f-kube-api-access-h8d26" (OuterVolumeSpecName: "kube-api-access-h8d26") pod "b2cbc54d-d182-4b8f-8e1d-a63b109bb41f" (UID: "b2cbc54d-d182-4b8f-8e1d-a63b109bb41f"). InnerVolumeSpecName "kube-api-access-h8d26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:12:42 crc kubenswrapper[4687]: I0131 07:12:42.690733 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgzxr\" (UniqueName: \"kubernetes.io/projected/8189cedc-a578-41ea-89ea-75af7e188168-kube-api-access-rgzxr\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:42 crc kubenswrapper[4687]: I0131 07:12:42.690770 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2cbc54d-d182-4b8f-8e1d-a63b109bb41f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:42 crc kubenswrapper[4687]: I0131 07:12:42.690779 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8189cedc-a578-41ea-89ea-75af7e188168-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:42 crc kubenswrapper[4687]: I0131 07:12:42.690788 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8d26\" (UniqueName: \"kubernetes.io/projected/b2cbc54d-d182-4b8f-8e1d-a63b109bb41f-kube-api-access-h8d26\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:43 crc kubenswrapper[4687]: I0131 07:12:43.117373 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-665d-account-create-update-bp9g4" event={"ID":"b2cbc54d-d182-4b8f-8e1d-a63b109bb41f","Type":"ContainerDied","Data":"4e9663e840323a9886b713ae6f221e5c05daa14e8e7b25822103f23c275376f5"} Jan 31 07:12:43 crc kubenswrapper[4687]: I0131 07:12:43.117435 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e9663e840323a9886b713ae6f221e5c05daa14e8e7b25822103f23c275376f5" Jan 31 07:12:43 crc kubenswrapper[4687]: I0131 07:12:43.117530 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-665d-account-create-update-bp9g4" Jan 31 07:12:43 crc kubenswrapper[4687]: I0131 07:12:43.118644 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-6b297" event={"ID":"8189cedc-a578-41ea-89ea-75af7e188168","Type":"ContainerDied","Data":"a2df952328ae27c66598fc7a7e35c611a119bfd6d2c7f11e7bf204e86b68ca3d"} Jan 31 07:12:43 crc kubenswrapper[4687]: I0131 07:12:43.118673 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2df952328ae27c66598fc7a7e35c611a119bfd6d2c7f11e7bf204e86b68ca3d" Jan 31 07:12:43 crc kubenswrapper[4687]: I0131 07:12:43.118728 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-6b297" Jan 31 07:12:44 crc kubenswrapper[4687]: I0131 07:12:44.970238 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-7zqqv"] Jan 31 07:12:44 crc kubenswrapper[4687]: E0131 07:12:44.970811 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8189cedc-a578-41ea-89ea-75af7e188168" containerName="mariadb-database-create" Jan 31 07:12:44 crc kubenswrapper[4687]: I0131 07:12:44.970826 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8189cedc-a578-41ea-89ea-75af7e188168" containerName="mariadb-database-create" Jan 31 07:12:44 crc kubenswrapper[4687]: E0131 07:12:44.970841 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2cbc54d-d182-4b8f-8e1d-a63b109bb41f" containerName="mariadb-account-create-update" Jan 31 07:12:44 crc kubenswrapper[4687]: I0131 07:12:44.970848 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2cbc54d-d182-4b8f-8e1d-a63b109bb41f" containerName="mariadb-account-create-update" Jan 31 07:12:44 crc kubenswrapper[4687]: I0131 07:12:44.971009 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2cbc54d-d182-4b8f-8e1d-a63b109bb41f" 
containerName="mariadb-account-create-update" Jan 31 07:12:44 crc kubenswrapper[4687]: I0131 07:12:44.971027 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8189cedc-a578-41ea-89ea-75af7e188168" containerName="mariadb-database-create" Jan 31 07:12:44 crc kubenswrapper[4687]: I0131 07:12:44.971538 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-7zqqv" Jan 31 07:12:44 crc kubenswrapper[4687]: I0131 07:12:44.973841 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Jan 31 07:12:44 crc kubenswrapper[4687]: I0131 07:12:44.974197 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-md9c8" Jan 31 07:12:44 crc kubenswrapper[4687]: I0131 07:12:44.979467 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-7zqqv"] Jan 31 07:12:45 crc kubenswrapper[4687]: I0131 07:12:45.126214 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-config-data\") pod \"glance-db-sync-7zqqv\" (UID: \"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95\") " pod="glance-kuttl-tests/glance-db-sync-7zqqv" Jan 31 07:12:45 crc kubenswrapper[4687]: I0131 07:12:45.126335 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-db-sync-config-data\") pod \"glance-db-sync-7zqqv\" (UID: \"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95\") " pod="glance-kuttl-tests/glance-db-sync-7zqqv" Jan 31 07:12:45 crc kubenswrapper[4687]: I0131 07:12:45.126373 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q429\" (UniqueName: 
\"kubernetes.io/projected/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-kube-api-access-9q429\") pod \"glance-db-sync-7zqqv\" (UID: \"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95\") " pod="glance-kuttl-tests/glance-db-sync-7zqqv" Jan 31 07:12:45 crc kubenswrapper[4687]: I0131 07:12:45.228074 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-config-data\") pod \"glance-db-sync-7zqqv\" (UID: \"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95\") " pod="glance-kuttl-tests/glance-db-sync-7zqqv" Jan 31 07:12:45 crc kubenswrapper[4687]: I0131 07:12:45.228187 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-db-sync-config-data\") pod \"glance-db-sync-7zqqv\" (UID: \"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95\") " pod="glance-kuttl-tests/glance-db-sync-7zqqv" Jan 31 07:12:45 crc kubenswrapper[4687]: I0131 07:12:45.228227 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q429\" (UniqueName: \"kubernetes.io/projected/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-kube-api-access-9q429\") pod \"glance-db-sync-7zqqv\" (UID: \"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95\") " pod="glance-kuttl-tests/glance-db-sync-7zqqv" Jan 31 07:12:45 crc kubenswrapper[4687]: I0131 07:12:45.242837 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-config-data\") pod \"glance-db-sync-7zqqv\" (UID: \"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95\") " pod="glance-kuttl-tests/glance-db-sync-7zqqv" Jan 31 07:12:45 crc kubenswrapper[4687]: I0131 07:12:45.248199 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-db-sync-config-data\") pod 
\"glance-db-sync-7zqqv\" (UID: \"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95\") " pod="glance-kuttl-tests/glance-db-sync-7zqqv" Jan 31 07:12:45 crc kubenswrapper[4687]: I0131 07:12:45.248467 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q429\" (UniqueName: \"kubernetes.io/projected/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-kube-api-access-9q429\") pod \"glance-db-sync-7zqqv\" (UID: \"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95\") " pod="glance-kuttl-tests/glance-db-sync-7zqqv" Jan 31 07:12:45 crc kubenswrapper[4687]: I0131 07:12:45.331129 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-7zqqv" Jan 31 07:12:45 crc kubenswrapper[4687]: I0131 07:12:45.761713 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-7zqqv"] Jan 31 07:12:46 crc kubenswrapper[4687]: I0131 07:12:46.149851 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-7zqqv" event={"ID":"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95","Type":"ContainerStarted","Data":"b4f0f3d55dbaa80bb8924be45c5b673df200b3dddeb263abc8362d67849f4a10"} Jan 31 07:12:47 crc kubenswrapper[4687]: I0131 07:12:47.160966 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-7zqqv" event={"ID":"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95","Type":"ContainerStarted","Data":"868ec0ba8c19d831cc419d77039b4bcd7558ea51858b1d529ff026917af5595c"} Jan 31 07:12:47 crc kubenswrapper[4687]: I0131 07:12:47.178617 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-7zqqv" podStartSLOduration=3.178596878 podStartE2EDuration="3.178596878s" podCreationTimestamp="2026-01-31 07:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:12:47.17284687 +0000 UTC m=+1793.450106445" 
watchObservedRunningTime="2026-01-31 07:12:47.178596878 +0000 UTC m=+1793.455856453" Jan 31 07:12:49 crc kubenswrapper[4687]: I0131 07:12:49.603747 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:12:49 crc kubenswrapper[4687]: E0131 07:12:49.604357 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:12:50 crc kubenswrapper[4687]: I0131 07:12:50.190499 4687 generic.go:334] "Generic (PLEG): container finished" podID="0f65f6b3-5ea2-4c44-bdf8-8557c3816f95" containerID="868ec0ba8c19d831cc419d77039b4bcd7558ea51858b1d529ff026917af5595c" exitCode=0 Jan 31 07:12:50 crc kubenswrapper[4687]: I0131 07:12:50.190559 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-7zqqv" event={"ID":"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95","Type":"ContainerDied","Data":"868ec0ba8c19d831cc419d77039b4bcd7558ea51858b1d529ff026917af5595c"} Jan 31 07:12:51 crc kubenswrapper[4687]: I0131 07:12:51.477616 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-7zqqv" Jan 31 07:12:51 crc kubenswrapper[4687]: I0131 07:12:51.530390 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-db-sync-config-data\") pod \"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95\" (UID: \"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95\") " Jan 31 07:12:51 crc kubenswrapper[4687]: I0131 07:12:51.530489 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-config-data\") pod \"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95\" (UID: \"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95\") " Jan 31 07:12:51 crc kubenswrapper[4687]: I0131 07:12:51.530684 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q429\" (UniqueName: \"kubernetes.io/projected/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-kube-api-access-9q429\") pod \"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95\" (UID: \"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95\") " Jan 31 07:12:51 crc kubenswrapper[4687]: I0131 07:12:51.535702 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-kube-api-access-9q429" (OuterVolumeSpecName: "kube-api-access-9q429") pod "0f65f6b3-5ea2-4c44-bdf8-8557c3816f95" (UID: "0f65f6b3-5ea2-4c44-bdf8-8557c3816f95"). InnerVolumeSpecName "kube-api-access-9q429". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:12:51 crc kubenswrapper[4687]: I0131 07:12:51.535972 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0f65f6b3-5ea2-4c44-bdf8-8557c3816f95" (UID: "0f65f6b3-5ea2-4c44-bdf8-8557c3816f95"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:12:51 crc kubenswrapper[4687]: I0131 07:12:51.570265 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-config-data" (OuterVolumeSpecName: "config-data") pod "0f65f6b3-5ea2-4c44-bdf8-8557c3816f95" (UID: "0f65f6b3-5ea2-4c44-bdf8-8557c3816f95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:12:51 crc kubenswrapper[4687]: I0131 07:12:51.632877 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q429\" (UniqueName: \"kubernetes.io/projected/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-kube-api-access-9q429\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:51 crc kubenswrapper[4687]: I0131 07:12:51.632903 4687 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:51 crc kubenswrapper[4687]: I0131 07:12:51.632913 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:52 crc kubenswrapper[4687]: I0131 07:12:52.213123 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-7zqqv" event={"ID":"0f65f6b3-5ea2-4c44-bdf8-8557c3816f95","Type":"ContainerDied","Data":"b4f0f3d55dbaa80bb8924be45c5b673df200b3dddeb263abc8362d67849f4a10"} Jan 31 07:12:52 crc kubenswrapper[4687]: I0131 07:12:52.213167 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4f0f3d55dbaa80bb8924be45c5b673df200b3dddeb263abc8362d67849f4a10" Jan 31 07:12:52 crc kubenswrapper[4687]: I0131 07:12:52.213190 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-7zqqv" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.496866 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:12:53 crc kubenswrapper[4687]: E0131 07:12:53.497213 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f65f6b3-5ea2-4c44-bdf8-8557c3816f95" containerName="glance-db-sync" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.497231 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f65f6b3-5ea2-4c44-bdf8-8557c3816f95" containerName="glance-db-sync" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.497380 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f65f6b3-5ea2-4c44-bdf8-8557c3816f95" containerName="glance-db-sync" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.498430 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.501752 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.501861 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-md9c8" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.503805 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.512649 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.604575 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-etc-nvme\") pod 
\"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.604622 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.604649 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-dev\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.604699 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.604732 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-sys\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.604754 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-single-0\" (UID: 
\"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.604768 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-run\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.604784 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-lib-modules\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.604799 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/661bffe5-fcf9-4fbd-b43f-28bce622b81b-scripts\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.604844 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661bffe5-fcf9-4fbd-b43f-28bce622b81b-config-data\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.604872 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") 
" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.604904 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt8ww\" (UniqueName: \"kubernetes.io/projected/661bffe5-fcf9-4fbd-b43f-28bce622b81b-kube-api-access-nt8ww\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.604921 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/661bffe5-fcf9-4fbd-b43f-28bce622b81b-httpd-run\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.604940 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/661bffe5-fcf9-4fbd-b43f-28bce622b81b-logs\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.705750 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661bffe5-fcf9-4fbd-b43f-28bce622b81b-config-data\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706104 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" 
Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706155 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt8ww\" (UniqueName: \"kubernetes.io/projected/661bffe5-fcf9-4fbd-b43f-28bce622b81b-kube-api-access-nt8ww\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706181 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/661bffe5-fcf9-4fbd-b43f-28bce622b81b-httpd-run\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706202 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/661bffe5-fcf9-4fbd-b43f-28bce622b81b-logs\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706228 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-etc-nvme\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706280 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706314 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-dev\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706336 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706368 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-sys\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706391 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706432 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-run\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706444 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706488 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-lib-modules\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706688 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-sys\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706728 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-dev\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706785 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-run\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706784 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " 
pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706457 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-lib-modules\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706627 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706854 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/661bffe5-fcf9-4fbd-b43f-28bce622b81b-scripts\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.706819 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") device mount path \"/mnt/openstack/pv16\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.707156 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/661bffe5-fcf9-4fbd-b43f-28bce622b81b-httpd-run\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: 
I0131 07:12:53.707282 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/661bffe5-fcf9-4fbd-b43f-28bce622b81b-logs\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.707747 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-etc-nvme\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.711974 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661bffe5-fcf9-4fbd-b43f-28bce622b81b-config-data\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.717154 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/661bffe5-fcf9-4fbd-b43f-28bce622b81b-scripts\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.728314 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.728431 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage16-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.730974 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt8ww\" (UniqueName: \"kubernetes.io/projected/661bffe5-fcf9-4fbd-b43f-28bce622b81b-kube-api-access-nt8ww\") pod \"glance-default-single-0\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.813470 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:53 crc kubenswrapper[4687]: I0131 07:12:53.844426 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:12:54 crc kubenswrapper[4687]: I0131 07:12:54.329430 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:12:55 crc kubenswrapper[4687]: I0131 07:12:55.236121 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"661bffe5-fcf9-4fbd-b43f-28bce622b81b","Type":"ContainerStarted","Data":"23447a9b6dfcc9b4e36dab2b6f5118655460c9925d3013845ad221c5da9d4c47"} Jan 31 07:12:55 crc kubenswrapper[4687]: I0131 07:12:55.236902 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"661bffe5-fcf9-4fbd-b43f-28bce622b81b","Type":"ContainerStarted","Data":"27cb82adf46965d62726c057139a0f4c4fa95b9f2ce750a2d915450ce454d7d8"} Jan 31 07:12:55 crc kubenswrapper[4687]: I0131 07:12:55.236916 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" 
event={"ID":"661bffe5-fcf9-4fbd-b43f-28bce622b81b","Type":"ContainerStarted","Data":"0dacf24a991a38097f4774d7d150af7c21c943e6843ef951df3bf1328f9a3c05"} Jan 31 07:12:55 crc kubenswrapper[4687]: I0131 07:12:55.236280 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="661bffe5-fcf9-4fbd-b43f-28bce622b81b" containerName="glance-httpd" containerID="cri-o://23447a9b6dfcc9b4e36dab2b6f5118655460c9925d3013845ad221c5da9d4c47" gracePeriod=30 Jan 31 07:12:55 crc kubenswrapper[4687]: I0131 07:12:55.236239 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="661bffe5-fcf9-4fbd-b43f-28bce622b81b" containerName="glance-log" containerID="cri-o://27cb82adf46965d62726c057139a0f4c4fa95b9f2ce750a2d915450ce454d7d8" gracePeriod=30 Jan 31 07:12:55 crc kubenswrapper[4687]: I0131 07:12:55.273467 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=3.273441609 podStartE2EDuration="3.273441609s" podCreationTimestamp="2026-01-31 07:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:12:55.263775584 +0000 UTC m=+1801.541035229" watchObservedRunningTime="2026-01-31 07:12:55.273441609 +0000 UTC m=+1801.550701184" Jan 31 07:12:56 crc kubenswrapper[4687]: W0131 07:12:56.255814 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/user.slice/user-0.slice/session-c48.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/user.slice/user-0.slice/session-c48.scope: no such file or directory Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.257495 4687 generic.go:334] "Generic (PLEG): container finished" podID="661bffe5-fcf9-4fbd-b43f-28bce622b81b" 
containerID="23447a9b6dfcc9b4e36dab2b6f5118655460c9925d3013845ad221c5da9d4c47" exitCode=143 Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.257532 4687 generic.go:334] "Generic (PLEG): container finished" podID="661bffe5-fcf9-4fbd-b43f-28bce622b81b" containerID="27cb82adf46965d62726c057139a0f4c4fa95b9f2ce750a2d915450ce454d7d8" exitCode=143 Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.257554 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"661bffe5-fcf9-4fbd-b43f-28bce622b81b","Type":"ContainerDied","Data":"23447a9b6dfcc9b4e36dab2b6f5118655460c9925d3013845ad221c5da9d4c47"} Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.257580 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"661bffe5-fcf9-4fbd-b43f-28bce622b81b","Type":"ContainerDied","Data":"27cb82adf46965d62726c057139a0f4c4fa95b9f2ce750a2d915450ce454d7d8"} Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.399101 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.548478 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661bffe5-fcf9-4fbd-b43f-28bce622b81b-config-data\") pod \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.548566 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-var-locks-brick\") pod \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.548586 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-etc-iscsi\") pod \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.548606 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.548621 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.548643 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt8ww\" (UniqueName: 
\"kubernetes.io/projected/661bffe5-fcf9-4fbd-b43f-28bce622b81b-kube-api-access-nt8ww\") pod \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.548642 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "661bffe5-fcf9-4fbd-b43f-28bce622b81b" (UID: "661bffe5-fcf9-4fbd-b43f-28bce622b81b"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.548671 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-lib-modules\") pod \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.548701 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/661bffe5-fcf9-4fbd-b43f-28bce622b81b-httpd-run\") pod \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.548716 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-etc-nvme\") pod \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.548742 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-run\") pod \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " Jan 
31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.548778 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-dev\") pod \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.548792 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-sys\") pod \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.548816 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/661bffe5-fcf9-4fbd-b43f-28bce622b81b-logs\") pod \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.548837 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/661bffe5-fcf9-4fbd-b43f-28bce622b81b-scripts\") pod \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\" (UID: \"661bffe5-fcf9-4fbd-b43f-28bce622b81b\") " Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.548994 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "661bffe5-fcf9-4fbd-b43f-28bce622b81b" (UID: "661bffe5-fcf9-4fbd-b43f-28bce622b81b"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.549021 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "661bffe5-fcf9-4fbd-b43f-28bce622b81b" (UID: "661bffe5-fcf9-4fbd-b43f-28bce622b81b"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.549177 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.549189 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.549199 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.549248 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-sys" (OuterVolumeSpecName: "sys") pod "661bffe5-fcf9-4fbd-b43f-28bce622b81b" (UID: "661bffe5-fcf9-4fbd-b43f-28bce622b81b"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.549298 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-run" (OuterVolumeSpecName: "run") pod "661bffe5-fcf9-4fbd-b43f-28bce622b81b" (UID: "661bffe5-fcf9-4fbd-b43f-28bce622b81b"). 
InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.549317 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-dev" (OuterVolumeSpecName: "dev") pod "661bffe5-fcf9-4fbd-b43f-28bce622b81b" (UID: "661bffe5-fcf9-4fbd-b43f-28bce622b81b"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.549366 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "661bffe5-fcf9-4fbd-b43f-28bce622b81b" (UID: "661bffe5-fcf9-4fbd-b43f-28bce622b81b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.549440 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/661bffe5-fcf9-4fbd-b43f-28bce622b81b-logs" (OuterVolumeSpecName: "logs") pod "661bffe5-fcf9-4fbd-b43f-28bce622b81b" (UID: "661bffe5-fcf9-4fbd-b43f-28bce622b81b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.549708 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/661bffe5-fcf9-4fbd-b43f-28bce622b81b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "661bffe5-fcf9-4fbd-b43f-28bce622b81b" (UID: "661bffe5-fcf9-4fbd-b43f-28bce622b81b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.555673 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance-cache") pod "661bffe5-fcf9-4fbd-b43f-28bce622b81b" (UID: "661bffe5-fcf9-4fbd-b43f-28bce622b81b"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.555722 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/661bffe5-fcf9-4fbd-b43f-28bce622b81b-kube-api-access-nt8ww" (OuterVolumeSpecName: "kube-api-access-nt8ww") pod "661bffe5-fcf9-4fbd-b43f-28bce622b81b" (UID: "661bffe5-fcf9-4fbd-b43f-28bce622b81b"). InnerVolumeSpecName "kube-api-access-nt8ww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.555725 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage16-crc" (OuterVolumeSpecName: "glance") pod "661bffe5-fcf9-4fbd-b43f-28bce622b81b" (UID: "661bffe5-fcf9-4fbd-b43f-28bce622b81b"). InnerVolumeSpecName "local-storage16-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.555914 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661bffe5-fcf9-4fbd-b43f-28bce622b81b-scripts" (OuterVolumeSpecName: "scripts") pod "661bffe5-fcf9-4fbd-b43f-28bce622b81b" (UID: "661bffe5-fcf9-4fbd-b43f-28bce622b81b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.582450 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661bffe5-fcf9-4fbd-b43f-28bce622b81b-config-data" (OuterVolumeSpecName: "config-data") pod "661bffe5-fcf9-4fbd-b43f-28bce622b81b" (UID: "661bffe5-fcf9-4fbd-b43f-28bce622b81b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.650579 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" " Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.650613 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.650627 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt8ww\" (UniqueName: \"kubernetes.io/projected/661bffe5-fcf9-4fbd-b43f-28bce622b81b-kube-api-access-nt8ww\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.650639 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.650648 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/661bffe5-fcf9-4fbd-b43f-28bce622b81b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.650657 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.650665 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.650674 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/661bffe5-fcf9-4fbd-b43f-28bce622b81b-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.650682 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/661bffe5-fcf9-4fbd-b43f-28bce622b81b-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.650689 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/661bffe5-fcf9-4fbd-b43f-28bce622b81b-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.650697 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661bffe5-fcf9-4fbd-b43f-28bce622b81b-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.664308 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.664308 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage16-crc" (UniqueName: "kubernetes.io/local-volume/local-storage16-crc") on node "crc" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.752811 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage16-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:56 crc kubenswrapper[4687]: I0131 07:12:56.752854 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.269383 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"661bffe5-fcf9-4fbd-b43f-28bce622b81b","Type":"ContainerDied","Data":"0dacf24a991a38097f4774d7d150af7c21c943e6843ef951df3bf1328f9a3c05"} Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.269458 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.269470 4687 scope.go:117] "RemoveContainer" containerID="23447a9b6dfcc9b4e36dab2b6f5118655460c9925d3013845ad221c5da9d4c47" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.290767 4687 scope.go:117] "RemoveContainer" containerID="27cb82adf46965d62726c057139a0f4c4fa95b9f2ce750a2d915450ce454d7d8" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.307263 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.312640 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.334264 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:12:57 crc kubenswrapper[4687]: E0131 07:12:57.334895 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661bffe5-fcf9-4fbd-b43f-28bce622b81b" containerName="glance-httpd" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.334993 4687 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="661bffe5-fcf9-4fbd-b43f-28bce622b81b" containerName="glance-httpd" Jan 31 07:12:57 crc kubenswrapper[4687]: E0131 07:12:57.335070 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661bffe5-fcf9-4fbd-b43f-28bce622b81b" containerName="glance-log" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.335231 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="661bffe5-fcf9-4fbd-b43f-28bce622b81b" containerName="glance-log" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.335542 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="661bffe5-fcf9-4fbd-b43f-28bce622b81b" containerName="glance-httpd" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.335632 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="661bffe5-fcf9-4fbd-b43f-28bce622b81b" containerName="glance-log" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.338233 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.340330 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.340437 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.342238 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-md9c8" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.344185 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.462825 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-httpd-run\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.462868 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-config-data\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.462921 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.462948 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-logs\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.462988 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.463009 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.463043 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-run\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.463087 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-etc-nvme\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.463116 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-dev\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.463145 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-lib-modules\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.463165 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-sys\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.463199 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9h8d\" (UniqueName: \"kubernetes.io/projected/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-kube-api-access-n9h8d\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.463297 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-scripts\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.463328 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.564677 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.564731 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.564778 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-run\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.564814 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-etc-nvme\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.564836 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-dev\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.564871 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-lib-modules\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.564892 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-sys\") pod \"glance-default-single-0\" (UID: 
\"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.564927 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9h8d\" (UniqueName: \"kubernetes.io/projected/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-kube-api-access-n9h8d\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.564956 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-scripts\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.564977 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.564998 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-httpd-run\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.565016 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-config-data\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 
07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.565041 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.565057 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-logs\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.565479 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-run\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.565494 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-dev\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.565519 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.565542 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-lib-modules\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.565590 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-sys\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.565663 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-logs\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.565706 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.565762 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") device mount path \"/mnt/openstack/pv16\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.566345 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-httpd-run\") pod \"glance-default-single-0\" 
(UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.565818 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-etc-nvme\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.565769 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.580703 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-config-data\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.581000 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-scripts\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.583131 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9h8d\" (UniqueName: \"kubernetes.io/projected/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-kube-api-access-n9h8d\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" 
Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.586253 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.597267 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-single-0\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.610318 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="661bffe5-fcf9-4fbd-b43f-28bce622b81b" path="/var/lib/kubelet/pods/661bffe5-fcf9-4fbd-b43f-28bce622b81b/volumes" Jan 31 07:12:57 crc kubenswrapper[4687]: I0131 07:12:57.658404 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:12:58 crc kubenswrapper[4687]: I0131 07:12:58.078077 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:12:58 crc kubenswrapper[4687]: I0131 07:12:58.279236 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"e15729c6-dfd3-4296-85ec-4f56ddeb93cc","Type":"ContainerStarted","Data":"64caa539c61674b1c498e75d8b3d383aa3ca449e50c95916d402c9a8519e33e1"} Jan 31 07:12:59 crc kubenswrapper[4687]: I0131 07:12:59.287228 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"e15729c6-dfd3-4296-85ec-4f56ddeb93cc","Type":"ContainerStarted","Data":"08005a3b6738d288af653fb0491e09c4bb0db2428560c014805bff64f8b688e1"} Jan 31 07:12:59 crc kubenswrapper[4687]: I0131 07:12:59.287708 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"e15729c6-dfd3-4296-85ec-4f56ddeb93cc","Type":"ContainerStarted","Data":"b751e2706038e274031fffe8e20b2114b89fe9b21172748c952372f9ce4e0b9b"} Jan 31 07:12:59 crc kubenswrapper[4687]: I0131 07:12:59.310746 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=2.310726099 podStartE2EDuration="2.310726099s" podCreationTimestamp="2026-01-31 07:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:12:59.308063136 +0000 UTC m=+1805.585322731" watchObservedRunningTime="2026-01-31 07:12:59.310726099 +0000 UTC m=+1805.587985674" Jan 31 07:13:03 crc kubenswrapper[4687]: I0131 07:13:03.604741 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:13:03 crc kubenswrapper[4687]: E0131 
07:13:03.605818 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:13:07 crc kubenswrapper[4687]: I0131 07:13:07.658965 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:13:07 crc kubenswrapper[4687]: I0131 07:13:07.659265 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:13:07 crc kubenswrapper[4687]: I0131 07:13:07.706074 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:13:07 crc kubenswrapper[4687]: I0131 07:13:07.716930 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:13:08 crc kubenswrapper[4687]: I0131 07:13:08.353002 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:13:08 crc kubenswrapper[4687]: I0131 07:13:08.353059 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:13:10 crc kubenswrapper[4687]: I0131 07:13:10.572179 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:13:10 crc kubenswrapper[4687]: I0131 07:13:10.584682 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:13:10 crc kubenswrapper[4687]: I0131 07:13:10.600384 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.743753 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.747567 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.756171 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.757376 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.763102 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.768119 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.847584 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-sys\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.847653 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-lib-modules\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.847686 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-etc-iscsi\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.847741 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-etc-nvme\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.847788 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.847837 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-scripts\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.847860 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-sys\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.847887 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.847907 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-config-data\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.847928 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-lib-modules\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.847972 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848002 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-run\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848023 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-run\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848050 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-var-locks-brick\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848083 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848107 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7lkc\" (UniqueName: \"kubernetes.io/projected/953845d5-c221-4b09-bba5-4f93f24f0a50-kube-api-access-t7lkc\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848139 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-etc-nvme\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848162 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-dev\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848183 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/953845d5-c221-4b09-bba5-4f93f24f0a50-logs\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848226 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhr7p\" (UniqueName: \"kubernetes.io/projected/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-kube-api-access-fhr7p\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848362 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848518 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953845d5-c221-4b09-bba5-4f93f24f0a50-config-data\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848587 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953845d5-c221-4b09-bba5-4f93f24f0a50-scripts\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848613 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/953845d5-c221-4b09-bba5-4f93f24f0a50-httpd-run\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848634 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848673 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-dev\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848699 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-logs\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.848719 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-httpd-run\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950560 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-config-data\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950607 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950622 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-lib-modules\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950641 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950665 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-run\") pod 
\"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950680 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-run\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950701 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-var-locks-brick\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950736 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-lib-modules\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950727 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950754 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-run\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc 
kubenswrapper[4687]: I0131 07:13:13.950758 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-run\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950770 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7lkc\" (UniqueName: \"kubernetes.io/projected/953845d5-c221-4b09-bba5-4f93f24f0a50-kube-api-access-t7lkc\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950831 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-var-locks-brick\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950878 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-etc-nvme\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950905 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-dev\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950929 4687 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/953845d5-c221-4b09-bba5-4f93f24f0a50-logs\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950938 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-etc-nvme\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950957 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhr7p\" (UniqueName: \"kubernetes.io/projected/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-kube-api-access-fhr7p\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950991 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.951068 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") device mount path \"/mnt/openstack/pv14\"" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.951127 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.951135 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") device mount path \"/mnt/openstack/pv12\"" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.950964 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-dev\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.951196 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.951076 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953845d5-c221-4b09-bba5-4f93f24f0a50-config-data\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.951433 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/953845d5-c221-4b09-bba5-4f93f24f0a50-logs\") pod 
\"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.956369 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953845d5-c221-4b09-bba5-4f93f24f0a50-scripts\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.956512 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/953845d5-c221-4b09-bba5-4f93f24f0a50-httpd-run\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.956634 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.956742 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.956870 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-dev\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " 
pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.956750 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-dev\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.957075 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/953845d5-c221-4b09-bba5-4f93f24f0a50-httpd-run\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.957105 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-logs\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.957272 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-httpd-run\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.957423 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-sys\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.957530 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-lib-modules\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.957631 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-etc-iscsi\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.957732 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-etc-nvme\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.957834 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.957996 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-httpd-run\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.958051 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-etc-nvme\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.958077 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-lib-modules\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.958100 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-etc-iscsi\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.957735 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-logs\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.958139 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.957776 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-sys\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") 
" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.958440 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-scripts\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.958547 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-sys\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.958546 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-config-data\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.958800 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-sys\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.961519 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953845d5-c221-4b09-bba5-4f93f24f0a50-config-data\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.963889 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953845d5-c221-4b09-bba5-4f93f24f0a50-scripts\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.966772 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-scripts\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.968354 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhr7p\" (UniqueName: \"kubernetes.io/projected/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-kube-api-access-fhr7p\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.973114 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7lkc\" (UniqueName: \"kubernetes.io/projected/953845d5-c221-4b09-bba5-4f93f24f0a50-kube-api-access-t7lkc\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.984591 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.984949 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") 
pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.985833 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:13 crc kubenswrapper[4687]: I0131 07:13:13.988310 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-single-1\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:14 crc kubenswrapper[4687]: I0131 07:13:14.069550 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:14 crc kubenswrapper[4687]: I0131 07:13:14.084044 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:14 crc kubenswrapper[4687]: I0131 07:13:14.489380 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:13:14 crc kubenswrapper[4687]: W0131 07:13:14.494973 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod953845d5_c221_4b09_bba5_4f93f24f0a50.slice/crio-8e21b621d36efa79d2b1a8f1149c6cd6b727e8f9810097384fa3986f240a1324 WatchSource:0}: Error finding container 8e21b621d36efa79d2b1a8f1149c6cd6b727e8f9810097384fa3986f240a1324: Status 404 returned error can't find the container with id 8e21b621d36efa79d2b1a8f1149c6cd6b727e8f9810097384fa3986f240a1324 Jan 31 07:13:14 crc kubenswrapper[4687]: I0131 07:13:14.544342 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Jan 31 07:13:14 crc kubenswrapper[4687]: W0131 07:13:14.544951 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7b5bb75_8a85_4ca2_9163_2dc69788ead2.slice/crio-43b85f00bec39df5d67f67e23c3346ea956c1d457cb5fe851c30776e6437aa2e WatchSource:0}: Error finding container 43b85f00bec39df5d67f67e23c3346ea956c1d457cb5fe851c30776e6437aa2e: Status 404 returned error can't find the container with id 43b85f00bec39df5d67f67e23c3346ea956c1d457cb5fe851c30776e6437aa2e Jan 31 07:13:15 crc kubenswrapper[4687]: I0131 07:13:15.415943 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"953845d5-c221-4b09-bba5-4f93f24f0a50","Type":"ContainerStarted","Data":"a0ee07972e49317e30ceb9091cc6fbbf14acf5f496d9ddd78474124138694924"} Jan 31 07:13:15 crc kubenswrapper[4687]: I0131 07:13:15.416526 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" 
event={"ID":"953845d5-c221-4b09-bba5-4f93f24f0a50","Type":"ContainerStarted","Data":"579adeb0552c142412e59b63e529c041b072bf6f86a93b9761f547138b9cd720"} Jan 31 07:13:15 crc kubenswrapper[4687]: I0131 07:13:15.416549 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"953845d5-c221-4b09-bba5-4f93f24f0a50","Type":"ContainerStarted","Data":"8e21b621d36efa79d2b1a8f1149c6cd6b727e8f9810097384fa3986f240a1324"} Jan 31 07:13:15 crc kubenswrapper[4687]: I0131 07:13:15.418458 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"b7b5bb75-8a85-4ca2-9163-2dc69788ead2","Type":"ContainerStarted","Data":"951bc8fad34612675d2591ff08acf54dd26f3a9a2bf9c2bb7fc47d537f02e091"} Jan 31 07:13:15 crc kubenswrapper[4687]: I0131 07:13:15.418497 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"b7b5bb75-8a85-4ca2-9163-2dc69788ead2","Type":"ContainerStarted","Data":"ed733edf67d295e77bdf005b04772bb5bfed5d894bd7db22c4acd8a18c412747"} Jan 31 07:13:15 crc kubenswrapper[4687]: I0131 07:13:15.418511 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"b7b5bb75-8a85-4ca2-9163-2dc69788ead2","Type":"ContainerStarted","Data":"43b85f00bec39df5d67f67e23c3346ea956c1d457cb5fe851c30776e6437aa2e"} Jan 31 07:13:15 crc kubenswrapper[4687]: I0131 07:13:15.444354 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-2" podStartSLOduration=3.44432871 podStartE2EDuration="3.44432871s" podCreationTimestamp="2026-01-31 07:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:13:15.437628446 +0000 UTC m=+1821.714888021" watchObservedRunningTime="2026-01-31 07:13:15.44432871 +0000 UTC m=+1821.721588285" 
Jan 31 07:13:16 crc kubenswrapper[4687]: I0131 07:13:16.453878 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-1" podStartSLOduration=4.453855487 podStartE2EDuration="4.453855487s" podCreationTimestamp="2026-01-31 07:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:13:16.453046135 +0000 UTC m=+1822.730305740" watchObservedRunningTime="2026-01-31 07:13:16.453855487 +0000 UTC m=+1822.731115062" Jan 31 07:13:18 crc kubenswrapper[4687]: I0131 07:13:18.604433 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:13:18 crc kubenswrapper[4687]: E0131 07:13:18.605698 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:13:24 crc kubenswrapper[4687]: I0131 07:13:24.070729 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:24 crc kubenswrapper[4687]: I0131 07:13:24.071218 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:24 crc kubenswrapper[4687]: I0131 07:13:24.084207 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:24 crc kubenswrapper[4687]: I0131 07:13:24.084541 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-2" 
Jan 31 07:13:24 crc kubenswrapper[4687]: I0131 07:13:24.096735 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:24 crc kubenswrapper[4687]: I0131 07:13:24.106836 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:24 crc kubenswrapper[4687]: I0131 07:13:24.109615 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:24 crc kubenswrapper[4687]: I0131 07:13:24.121919 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:24 crc kubenswrapper[4687]: I0131 07:13:24.491482 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:24 crc kubenswrapper[4687]: I0131 07:13:24.491851 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:24 crc kubenswrapper[4687]: I0131 07:13:24.491866 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:24 crc kubenswrapper[4687]: I0131 07:13:24.491879 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:26 crc kubenswrapper[4687]: I0131 07:13:26.633932 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:26 crc kubenswrapper[4687]: I0131 07:13:26.634069 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:13:26 crc kubenswrapper[4687]: I0131 07:13:26.661284 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:26 
crc kubenswrapper[4687]: I0131 07:13:26.927152 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:26 crc kubenswrapper[4687]: I0131 07:13:26.927274 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:13:26 crc kubenswrapper[4687]: I0131 07:13:26.931734 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:28 crc kubenswrapper[4687]: I0131 07:13:28.115932 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Jan 31 07:13:28 crc kubenswrapper[4687]: I0131 07:13:28.133980 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:13:28 crc kubenswrapper[4687]: I0131 07:13:28.518634 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-2" podUID="b7b5bb75-8a85-4ca2-9163-2dc69788ead2" containerName="glance-log" containerID="cri-o://ed733edf67d295e77bdf005b04772bb5bfed5d894bd7db22c4acd8a18c412747" gracePeriod=30 Jan 31 07:13:28 crc kubenswrapper[4687]: I0131 07:13:28.518716 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-2" podUID="b7b5bb75-8a85-4ca2-9163-2dc69788ead2" containerName="glance-httpd" containerID="cri-o://951bc8fad34612675d2591ff08acf54dd26f3a9a2bf9c2bb7fc47d537f02e091" gracePeriod=30 Jan 31 07:13:28 crc kubenswrapper[4687]: I0131 07:13:28.518805 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="953845d5-c221-4b09-bba5-4f93f24f0a50" containerName="glance-log" containerID="cri-o://579adeb0552c142412e59b63e529c041b072bf6f86a93b9761f547138b9cd720" gracePeriod=30 Jan 31 07:13:28 crc kubenswrapper[4687]: I0131 07:13:28.518897 4687 
kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="953845d5-c221-4b09-bba5-4f93f24f0a50" containerName="glance-httpd" containerID="cri-o://a0ee07972e49317e30ceb9091cc6fbbf14acf5f496d9ddd78474124138694924" gracePeriod=30 Jan 31 07:13:28 crc kubenswrapper[4687]: I0131 07:13:28.523935 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-2" podUID="b7b5bb75-8a85-4ca2-9163-2dc69788ead2" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.141:9292/healthcheck\": EOF" Jan 31 07:13:28 crc kubenswrapper[4687]: I0131 07:13:28.530440 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-1" podUID="953845d5-c221-4b09-bba5-4f93f24f0a50" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.140:9292/healthcheck\": EOF" Jan 31 07:13:28 crc kubenswrapper[4687]: I0131 07:13:28.530986 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-2" podUID="b7b5bb75-8a85-4ca2-9163-2dc69788ead2" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.141:9292/healthcheck\": EOF" Jan 31 07:13:28 crc kubenswrapper[4687]: I0131 07:13:28.531042 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-1" podUID="953845d5-c221-4b09-bba5-4f93f24f0a50" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.140:9292/healthcheck\": EOF" Jan 31 07:13:29 crc kubenswrapper[4687]: I0131 07:13:29.528170 4687 generic.go:334] "Generic (PLEG): container finished" podID="b7b5bb75-8a85-4ca2-9163-2dc69788ead2" containerID="ed733edf67d295e77bdf005b04772bb5bfed5d894bd7db22c4acd8a18c412747" exitCode=143 Jan 31 07:13:29 crc kubenswrapper[4687]: I0131 07:13:29.528260 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"b7b5bb75-8a85-4ca2-9163-2dc69788ead2","Type":"ContainerDied","Data":"ed733edf67d295e77bdf005b04772bb5bfed5d894bd7db22c4acd8a18c412747"} Jan 31 07:13:29 crc kubenswrapper[4687]: I0131 07:13:29.530277 4687 generic.go:334] "Generic (PLEG): container finished" podID="953845d5-c221-4b09-bba5-4f93f24f0a50" containerID="579adeb0552c142412e59b63e529c041b072bf6f86a93b9761f547138b9cd720" exitCode=143 Jan 31 07:13:29 crc kubenswrapper[4687]: I0131 07:13:29.530314 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"953845d5-c221-4b09-bba5-4f93f24f0a50","Type":"ContainerDied","Data":"579adeb0552c142412e59b63e529c041b072bf6f86a93b9761f547138b9cd720"} Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.310367 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.439867 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-config-data\") pod \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.439938 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-httpd-run\") pod \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.439961 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-etc-iscsi\") pod \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " 
Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.440000 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-lib-modules\") pod \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.440080 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "b7b5bb75-8a85-4ca2-9163-2dc69788ead2" (UID: "b7b5bb75-8a85-4ca2-9163-2dc69788ead2"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.440166 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b7b5bb75-8a85-4ca2-9163-2dc69788ead2" (UID: "b7b5bb75-8a85-4ca2-9163-2dc69788ead2"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.440205 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-scripts\") pod \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.440232 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-etc-nvme\") pod \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.440277 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-dev\") pod \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.440301 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.440343 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-dev" (OuterVolumeSpecName: "dev") pod "b7b5bb75-8a85-4ca2-9163-2dc69788ead2" (UID: "b7b5bb75-8a85-4ca2-9163-2dc69788ead2"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.440383 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-logs\") pod \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.440389 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "b7b5bb75-8a85-4ca2-9163-2dc69788ead2" (UID: "b7b5bb75-8a85-4ca2-9163-2dc69788ead2"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.440394 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b7b5bb75-8a85-4ca2-9163-2dc69788ead2" (UID: "b7b5bb75-8a85-4ca2-9163-2dc69788ead2"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.440974 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-var-locks-brick\") pod \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441045 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhr7p\" (UniqueName: \"kubernetes.io/projected/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-kube-api-access-fhr7p\") pod \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441066 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441093 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-run\") pod \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441185 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-sys\") pod \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\" (UID: \"b7b5bb75-8a85-4ca2-9163-2dc69788ead2\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441074 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-logs" (OuterVolumeSpecName: 
"logs") pod "b7b5bb75-8a85-4ca2-9163-2dc69788ead2" (UID: "b7b5bb75-8a85-4ca2-9163-2dc69788ead2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441114 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "b7b5bb75-8a85-4ca2-9163-2dc69788ead2" (UID: "b7b5bb75-8a85-4ca2-9163-2dc69788ead2"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441153 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-run" (OuterVolumeSpecName: "run") pod "b7b5bb75-8a85-4ca2-9163-2dc69788ead2" (UID: "b7b5bb75-8a85-4ca2-9163-2dc69788ead2"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441295 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-sys" (OuterVolumeSpecName: "sys") pod "b7b5bb75-8a85-4ca2-9163-2dc69788ead2" (UID: "b7b5bb75-8a85-4ca2-9163-2dc69788ead2"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441530 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441546 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441557 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441568 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441579 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441591 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441601 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441611 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.441622 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.445185 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "b7b5bb75-8a85-4ca2-9163-2dc69788ead2" (UID: "b7b5bb75-8a85-4ca2-9163-2dc69788ead2"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.445277 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance-cache") pod "b7b5bb75-8a85-4ca2-9163-2dc69788ead2" (UID: "b7b5bb75-8a85-4ca2-9163-2dc69788ead2"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.445280 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-scripts" (OuterVolumeSpecName: "scripts") pod "b7b5bb75-8a85-4ca2-9163-2dc69788ead2" (UID: "b7b5bb75-8a85-4ca2-9163-2dc69788ead2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.449605 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-kube-api-access-fhr7p" (OuterVolumeSpecName: "kube-api-access-fhr7p") pod "b7b5bb75-8a85-4ca2-9163-2dc69788ead2" (UID: "b7b5bb75-8a85-4ca2-9163-2dc69788ead2"). 
InnerVolumeSpecName "kube-api-access-fhr7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.479532 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-config-data" (OuterVolumeSpecName: "config-data") pod "b7b5bb75-8a85-4ca2-9163-2dc69788ead2" (UID: "b7b5bb75-8a85-4ca2-9163-2dc69788ead2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.543680 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.544011 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.544113 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.544127 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhr7p\" (UniqueName: \"kubernetes.io/projected/b7b5bb75-8a85-4ca2-9163-2dc69788ead2-kube-api-access-fhr7p\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.544151 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.557688 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.559167 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.564192 4687 generic.go:334] "Generic (PLEG): container finished" podID="b7b5bb75-8a85-4ca2-9163-2dc69788ead2" containerID="951bc8fad34612675d2591ff08acf54dd26f3a9a2bf9c2bb7fc47d537f02e091" exitCode=0 Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.564258 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"b7b5bb75-8a85-4ca2-9163-2dc69788ead2","Type":"ContainerDied","Data":"951bc8fad34612675d2591ff08acf54dd26f3a9a2bf9c2bb7fc47d537f02e091"} Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.564285 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"b7b5bb75-8a85-4ca2-9163-2dc69788ead2","Type":"ContainerDied","Data":"43b85f00bec39df5d67f67e23c3346ea956c1d457cb5fe851c30776e6437aa2e"} Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.564291 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.564311 4687 scope.go:117] "RemoveContainer" containerID="951bc8fad34612675d2591ff08acf54dd26f3a9a2bf9c2bb7fc47d537f02e091" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.571187 4687 generic.go:334] "Generic (PLEG): container finished" podID="953845d5-c221-4b09-bba5-4f93f24f0a50" containerID="a0ee07972e49317e30ceb9091cc6fbbf14acf5f496d9ddd78474124138694924" exitCode=0 Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.571220 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"953845d5-c221-4b09-bba5-4f93f24f0a50","Type":"ContainerDied","Data":"a0ee07972e49317e30ceb9091cc6fbbf14acf5f496d9ddd78474124138694924"} Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.607585 4687 scope.go:117] "RemoveContainer" containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.617934 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.620660 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.637245 4687 scope.go:117] "RemoveContainer" containerID="ed733edf67d295e77bdf005b04772bb5bfed5d894bd7db22c4acd8a18c412747" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.655771 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.655838 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" 
DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.682305 4687 scope.go:117] "RemoveContainer" containerID="951bc8fad34612675d2591ff08acf54dd26f3a9a2bf9c2bb7fc47d537f02e091" Jan 31 07:13:33 crc kubenswrapper[4687]: E0131 07:13:33.682666 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"951bc8fad34612675d2591ff08acf54dd26f3a9a2bf9c2bb7fc47d537f02e091\": container with ID starting with 951bc8fad34612675d2591ff08acf54dd26f3a9a2bf9c2bb7fc47d537f02e091 not found: ID does not exist" containerID="951bc8fad34612675d2591ff08acf54dd26f3a9a2bf9c2bb7fc47d537f02e091" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.682697 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"951bc8fad34612675d2591ff08acf54dd26f3a9a2bf9c2bb7fc47d537f02e091"} err="failed to get container status \"951bc8fad34612675d2591ff08acf54dd26f3a9a2bf9c2bb7fc47d537f02e091\": rpc error: code = NotFound desc = could not find container \"951bc8fad34612675d2591ff08acf54dd26f3a9a2bf9c2bb7fc47d537f02e091\": container with ID starting with 951bc8fad34612675d2591ff08acf54dd26f3a9a2bf9c2bb7fc47d537f02e091 not found: ID does not exist" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.682717 4687 scope.go:117] "RemoveContainer" containerID="ed733edf67d295e77bdf005b04772bb5bfed5d894bd7db22c4acd8a18c412747" Jan 31 07:13:33 crc kubenswrapper[4687]: E0131 07:13:33.682941 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed733edf67d295e77bdf005b04772bb5bfed5d894bd7db22c4acd8a18c412747\": container with ID starting with ed733edf67d295e77bdf005b04772bb5bfed5d894bd7db22c4acd8a18c412747 not found: ID does not exist" containerID="ed733edf67d295e77bdf005b04772bb5bfed5d894bd7db22c4acd8a18c412747" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.682959 4687 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed733edf67d295e77bdf005b04772bb5bfed5d894bd7db22c4acd8a18c412747"} err="failed to get container status \"ed733edf67d295e77bdf005b04772bb5bfed5d894bd7db22c4acd8a18c412747\": rpc error: code = NotFound desc = could not find container \"ed733edf67d295e77bdf005b04772bb5bfed5d894bd7db22c4acd8a18c412747\": container with ID starting with ed733edf67d295e77bdf005b04772bb5bfed5d894bd7db22c4acd8a18c412747 not found: ID does not exist" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.784612 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.857883 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-etc-nvme\") pod \"953845d5-c221-4b09-bba5-4f93f24f0a50\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.857935 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"953845d5-c221-4b09-bba5-4f93f24f0a50\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.857963 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-dev\") pod \"953845d5-c221-4b09-bba5-4f93f24f0a50\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.857986 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-var-locks-brick\") pod \"953845d5-c221-4b09-bba5-4f93f24f0a50\" (UID: 
\"953845d5-c221-4b09-bba5-4f93f24f0a50\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858039 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-lib-modules\") pod \"953845d5-c221-4b09-bba5-4f93f24f0a50\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858107 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-run\") pod \"953845d5-c221-4b09-bba5-4f93f24f0a50\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858131 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-sys\") pod \"953845d5-c221-4b09-bba5-4f93f24f0a50\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858202 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/953845d5-c221-4b09-bba5-4f93f24f0a50-httpd-run\") pod \"953845d5-c221-4b09-bba5-4f93f24f0a50\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858242 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7lkc\" (UniqueName: \"kubernetes.io/projected/953845d5-c221-4b09-bba5-4f93f24f0a50-kube-api-access-t7lkc\") pod \"953845d5-c221-4b09-bba5-4f93f24f0a50\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858265 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-etc-iscsi\") pod \"953845d5-c221-4b09-bba5-4f93f24f0a50\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858303 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/953845d5-c221-4b09-bba5-4f93f24f0a50-logs\") pod \"953845d5-c221-4b09-bba5-4f93f24f0a50\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858340 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953845d5-c221-4b09-bba5-4f93f24f0a50-scripts\") pod \"953845d5-c221-4b09-bba5-4f93f24f0a50\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858358 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"953845d5-c221-4b09-bba5-4f93f24f0a50\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858341 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "953845d5-c221-4b09-bba5-4f93f24f0a50" (UID: "953845d5-c221-4b09-bba5-4f93f24f0a50"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858380 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "953845d5-c221-4b09-bba5-4f93f24f0a50" (UID: "953845d5-c221-4b09-bba5-4f93f24f0a50"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858632 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-dev" (OuterVolumeSpecName: "dev") pod "953845d5-c221-4b09-bba5-4f93f24f0a50" (UID: "953845d5-c221-4b09-bba5-4f93f24f0a50"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858909 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953845d5-c221-4b09-bba5-4f93f24f0a50-config-data\") pod \"953845d5-c221-4b09-bba5-4f93f24f0a50\" (UID: \"953845d5-c221-4b09-bba5-4f93f24f0a50\") " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858488 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-run" (OuterVolumeSpecName: "run") pod "953845d5-c221-4b09-bba5-4f93f24f0a50" (UID: "953845d5-c221-4b09-bba5-4f93f24f0a50"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858448 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-sys" (OuterVolumeSpecName: "sys") pod "953845d5-c221-4b09-bba5-4f93f24f0a50" (UID: "953845d5-c221-4b09-bba5-4f93f24f0a50"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858492 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "953845d5-c221-4b09-bba5-4f93f24f0a50" (UID: "953845d5-c221-4b09-bba5-4f93f24f0a50"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.858502 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "953845d5-c221-4b09-bba5-4f93f24f0a50" (UID: "953845d5-c221-4b09-bba5-4f93f24f0a50"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.859402 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.859439 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.859448 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.859459 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.859467 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.859475 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.859483 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/953845d5-c221-4b09-bba5-4f93f24f0a50-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.862965 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/953845d5-c221-4b09-bba5-4f93f24f0a50-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "953845d5-c221-4b09-bba5-4f93f24f0a50" (UID: "953845d5-c221-4b09-bba5-4f93f24f0a50"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.863138 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/953845d5-c221-4b09-bba5-4f93f24f0a50-logs" (OuterVolumeSpecName: "logs") pod "953845d5-c221-4b09-bba5-4f93f24f0a50" (UID: "953845d5-c221-4b09-bba5-4f93f24f0a50"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.867228 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage14-crc" (OuterVolumeSpecName: "glance") pod "953845d5-c221-4b09-bba5-4f93f24f0a50" (UID: "953845d5-c221-4b09-bba5-4f93f24f0a50"). InnerVolumeSpecName "local-storage14-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.867239 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance-cache") pod "953845d5-c221-4b09-bba5-4f93f24f0a50" (UID: "953845d5-c221-4b09-bba5-4f93f24f0a50"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.873669 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953845d5-c221-4b09-bba5-4f93f24f0a50-scripts" (OuterVolumeSpecName: "scripts") pod "953845d5-c221-4b09-bba5-4f93f24f0a50" (UID: "953845d5-c221-4b09-bba5-4f93f24f0a50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.876767 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953845d5-c221-4b09-bba5-4f93f24f0a50-kube-api-access-t7lkc" (OuterVolumeSpecName: "kube-api-access-t7lkc") pod "953845d5-c221-4b09-bba5-4f93f24f0a50" (UID: "953845d5-c221-4b09-bba5-4f93f24f0a50"). InnerVolumeSpecName "kube-api-access-t7lkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.915138 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953845d5-c221-4b09-bba5-4f93f24f0a50-config-data" (OuterVolumeSpecName: "config-data") pod "953845d5-c221-4b09-bba5-4f93f24f0a50" (UID: "953845d5-c221-4b09-bba5-4f93f24f0a50"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.961140 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7lkc\" (UniqueName: \"kubernetes.io/projected/953845d5-c221-4b09-bba5-4f93f24f0a50-kube-api-access-t7lkc\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.961652 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/953845d5-c221-4b09-bba5-4f93f24f0a50-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.961666 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953845d5-c221-4b09-bba5-4f93f24f0a50-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.961736 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.961748 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953845d5-c221-4b09-bba5-4f93f24f0a50-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.961769 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" " Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.961798 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/953845d5-c221-4b09-bba5-4f93f24f0a50-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.979530 4687 operation_generator.go:917] UnmountDevice succeeded for volume 
"local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 31 07:13:33 crc kubenswrapper[4687]: I0131 07:13:33.983327 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage14-crc" (UniqueName: "kubernetes.io/local-volume/local-storage14-crc") on node "crc" Jan 31 07:13:34 crc kubenswrapper[4687]: I0131 07:13:34.063467 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:34 crc kubenswrapper[4687]: I0131 07:13:34.063500 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:34 crc kubenswrapper[4687]: I0131 07:13:34.581892 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"953845d5-c221-4b09-bba5-4f93f24f0a50","Type":"ContainerDied","Data":"8e21b621d36efa79d2b1a8f1149c6cd6b727e8f9810097384fa3986f240a1324"} Jan 31 07:13:34 crc kubenswrapper[4687]: I0131 07:13:34.582258 4687 scope.go:117] "RemoveContainer" containerID="a0ee07972e49317e30ceb9091cc6fbbf14acf5f496d9ddd78474124138694924" Jan 31 07:13:34 crc kubenswrapper[4687]: I0131 07:13:34.581915 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 07:13:34 crc kubenswrapper[4687]: I0131 07:13:34.593752 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerStarted","Data":"1458eddabbda7dfa359296e6f0341f043ba4048f4cd1df8854c7c04d61090c0a"} Jan 31 07:13:34 crc kubenswrapper[4687]: I0131 07:13:34.616513 4687 scope.go:117] "RemoveContainer" containerID="579adeb0552c142412e59b63e529c041b072bf6f86a93b9761f547138b9cd720" Jan 31 07:13:34 crc kubenswrapper[4687]: I0131 07:13:34.644108 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:13:34 crc kubenswrapper[4687]: I0131 07:13:34.657503 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 07:13:35 crc kubenswrapper[4687]: I0131 07:13:35.412620 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:13:35 crc kubenswrapper[4687]: I0131 07:13:35.412892 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="e15729c6-dfd3-4296-85ec-4f56ddeb93cc" containerName="glance-log" containerID="cri-o://b751e2706038e274031fffe8e20b2114b89fe9b21172748c952372f9ce4e0b9b" gracePeriod=30 Jan 31 07:13:35 crc kubenswrapper[4687]: I0131 07:13:35.412968 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="e15729c6-dfd3-4296-85ec-4f56ddeb93cc" containerName="glance-httpd" containerID="cri-o://08005a3b6738d288af653fb0491e09c4bb0db2428560c014805bff64f8b688e1" gracePeriod=30 Jan 31 07:13:35 crc kubenswrapper[4687]: I0131 07:13:35.610008 4687 generic.go:334] "Generic (PLEG): container finished" podID="e15729c6-dfd3-4296-85ec-4f56ddeb93cc" 
containerID="b751e2706038e274031fffe8e20b2114b89fe9b21172748c952372f9ce4e0b9b" exitCode=143 Jan 31 07:13:35 crc kubenswrapper[4687]: I0131 07:13:35.611907 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="953845d5-c221-4b09-bba5-4f93f24f0a50" path="/var/lib/kubelet/pods/953845d5-c221-4b09-bba5-4f93f24f0a50/volumes" Jan 31 07:13:35 crc kubenswrapper[4687]: I0131 07:13:35.612651 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7b5bb75-8a85-4ca2-9163-2dc69788ead2" path="/var/lib/kubelet/pods/b7b5bb75-8a85-4ca2-9163-2dc69788ead2/volumes" Jan 31 07:13:35 crc kubenswrapper[4687]: I0131 07:13:35.613217 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"e15729c6-dfd3-4296-85ec-4f56ddeb93cc","Type":"ContainerDied","Data":"b751e2706038e274031fffe8e20b2114b89fe9b21172748c952372f9ce4e0b9b"} Jan 31 07:13:38 crc kubenswrapper[4687]: I0131 07:13:38.914958 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033102 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-config-data\") pod \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033168 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-run\") pod \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033210 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-etc-iscsi\") pod \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033244 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-scripts\") pod \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033269 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-sys\") pod \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033272 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-run" (OuterVolumeSpecName: "run") 
pod "e15729c6-dfd3-4296-85ec-4f56ddeb93cc" (UID: "e15729c6-dfd3-4296-85ec-4f56ddeb93cc"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033291 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-lib-modules\") pod \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033289 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "e15729c6-dfd3-4296-85ec-4f56ddeb93cc" (UID: "e15729c6-dfd3-4296-85ec-4f56ddeb93cc"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033325 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-sys" (OuterVolumeSpecName: "sys") pod "e15729c6-dfd3-4296-85ec-4f56ddeb93cc" (UID: "e15729c6-dfd3-4296-85ec-4f56ddeb93cc"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033330 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-logs\") pod \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033390 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-dev\") pod \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033439 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-etc-nvme\") pod \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033475 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033495 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033511 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-var-locks-brick\") pod 
\"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033544 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-httpd-run\") pod \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033574 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9h8d\" (UniqueName: \"kubernetes.io/projected/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-kube-api-access-n9h8d\") pod \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\" (UID: \"e15729c6-dfd3-4296-85ec-4f56ddeb93cc\") " Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033779 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-logs" (OuterVolumeSpecName: "logs") pod "e15729c6-dfd3-4296-85ec-4f56ddeb93cc" (UID: "e15729c6-dfd3-4296-85ec-4f56ddeb93cc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033893 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033909 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033921 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033934 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.033963 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e15729c6-dfd3-4296-85ec-4f56ddeb93cc" (UID: "e15729c6-dfd3-4296-85ec-4f56ddeb93cc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.034097 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-dev" (OuterVolumeSpecName: "dev") pod "e15729c6-dfd3-4296-85ec-4f56ddeb93cc" (UID: "e15729c6-dfd3-4296-85ec-4f56ddeb93cc"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.034162 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "e15729c6-dfd3-4296-85ec-4f56ddeb93cc" (UID: "e15729c6-dfd3-4296-85ec-4f56ddeb93cc"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.034825 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e15729c6-dfd3-4296-85ec-4f56ddeb93cc" (UID: "e15729c6-dfd3-4296-85ec-4f56ddeb93cc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.034819 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "e15729c6-dfd3-4296-85ec-4f56ddeb93cc" (UID: "e15729c6-dfd3-4296-85ec-4f56ddeb93cc"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.039239 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-scripts" (OuterVolumeSpecName: "scripts") pod "e15729c6-dfd3-4296-85ec-4f56ddeb93cc" (UID: "e15729c6-dfd3-4296-85ec-4f56ddeb93cc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.039338 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance-cache") pod "e15729c6-dfd3-4296-85ec-4f56ddeb93cc" (UID: "e15729c6-dfd3-4296-85ec-4f56ddeb93cc"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.039505 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage16-crc" (OuterVolumeSpecName: "glance") pod "e15729c6-dfd3-4296-85ec-4f56ddeb93cc" (UID: "e15729c6-dfd3-4296-85ec-4f56ddeb93cc"). InnerVolumeSpecName "local-storage16-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.040018 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-kube-api-access-n9h8d" (OuterVolumeSpecName: "kube-api-access-n9h8d") pod "e15729c6-dfd3-4296-85ec-4f56ddeb93cc" (UID: "e15729c6-dfd3-4296-85ec-4f56ddeb93cc"). InnerVolumeSpecName "kube-api-access-n9h8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.069996 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-config-data" (OuterVolumeSpecName: "config-data") pod "e15729c6-dfd3-4296-85ec-4f56ddeb93cc" (UID: "e15729c6-dfd3-4296-85ec-4f56ddeb93cc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.135282 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.135329 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.135340 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.135351 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.135364 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.135402 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" " Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.135466 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.135480 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.135493 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.135693 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9h8d\" (UniqueName: \"kubernetes.io/projected/e15729c6-dfd3-4296-85ec-4f56ddeb93cc-kube-api-access-n9h8d\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.150754 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.151138 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage16-crc" (UniqueName: "kubernetes.io/local-volume/local-storage16-crc") on node "crc" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.236983 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.237027 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.640531 4687 generic.go:334] "Generic (PLEG): container finished" podID="e15729c6-dfd3-4296-85ec-4f56ddeb93cc" containerID="08005a3b6738d288af653fb0491e09c4bb0db2428560c014805bff64f8b688e1" exitCode=0 Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.640601 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"e15729c6-dfd3-4296-85ec-4f56ddeb93cc","Type":"ContainerDied","Data":"08005a3b6738d288af653fb0491e09c4bb0db2428560c014805bff64f8b688e1"} Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.640969 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"e15729c6-dfd3-4296-85ec-4f56ddeb93cc","Type":"ContainerDied","Data":"64caa539c61674b1c498e75d8b3d383aa3ca449e50c95916d402c9a8519e33e1"} Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.640624 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.641034 4687 scope.go:117] "RemoveContainer" containerID="08005a3b6738d288af653fb0491e09c4bb0db2428560c014805bff64f8b688e1" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.666672 4687 scope.go:117] "RemoveContainer" containerID="b751e2706038e274031fffe8e20b2114b89fe9b21172748c952372f9ce4e0b9b" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.674272 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.681319 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.703107 4687 scope.go:117] "RemoveContainer" containerID="08005a3b6738d288af653fb0491e09c4bb0db2428560c014805bff64f8b688e1" Jan 31 07:13:39 crc kubenswrapper[4687]: E0131 07:13:39.703488 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08005a3b6738d288af653fb0491e09c4bb0db2428560c014805bff64f8b688e1\": container with ID starting with 08005a3b6738d288af653fb0491e09c4bb0db2428560c014805bff64f8b688e1 not found: ID does not exist" 
containerID="08005a3b6738d288af653fb0491e09c4bb0db2428560c014805bff64f8b688e1" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.703534 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08005a3b6738d288af653fb0491e09c4bb0db2428560c014805bff64f8b688e1"} err="failed to get container status \"08005a3b6738d288af653fb0491e09c4bb0db2428560c014805bff64f8b688e1\": rpc error: code = NotFound desc = could not find container \"08005a3b6738d288af653fb0491e09c4bb0db2428560c014805bff64f8b688e1\": container with ID starting with 08005a3b6738d288af653fb0491e09c4bb0db2428560c014805bff64f8b688e1 not found: ID does not exist" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.703556 4687 scope.go:117] "RemoveContainer" containerID="b751e2706038e274031fffe8e20b2114b89fe9b21172748c952372f9ce4e0b9b" Jan 31 07:13:39 crc kubenswrapper[4687]: E0131 07:13:39.704027 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b751e2706038e274031fffe8e20b2114b89fe9b21172748c952372f9ce4e0b9b\": container with ID starting with b751e2706038e274031fffe8e20b2114b89fe9b21172748c952372f9ce4e0b9b not found: ID does not exist" containerID="b751e2706038e274031fffe8e20b2114b89fe9b21172748c952372f9ce4e0b9b" Jan 31 07:13:39 crc kubenswrapper[4687]: I0131 07:13:39.704079 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b751e2706038e274031fffe8e20b2114b89fe9b21172748c952372f9ce4e0b9b"} err="failed to get container status \"b751e2706038e274031fffe8e20b2114b89fe9b21172748c952372f9ce4e0b9b\": rpc error: code = NotFound desc = could not find container \"b751e2706038e274031fffe8e20b2114b89fe9b21172748c952372f9ce4e0b9b\": container with ID starting with b751e2706038e274031fffe8e20b2114b89fe9b21172748c952372f9ce4e0b9b not found: ID does not exist" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.784472 4687 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-7zqqv"] Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.792942 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-7zqqv"] Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.826190 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance665d-account-delete-bzjxs"] Jan 31 07:13:40 crc kubenswrapper[4687]: E0131 07:13:40.826506 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b5bb75-8a85-4ca2-9163-2dc69788ead2" containerName="glance-httpd" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.826521 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b5bb75-8a85-4ca2-9163-2dc69788ead2" containerName="glance-httpd" Jan 31 07:13:40 crc kubenswrapper[4687]: E0131 07:13:40.826538 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e15729c6-dfd3-4296-85ec-4f56ddeb93cc" containerName="glance-log" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.826545 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e15729c6-dfd3-4296-85ec-4f56ddeb93cc" containerName="glance-log" Jan 31 07:13:40 crc kubenswrapper[4687]: E0131 07:13:40.826568 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953845d5-c221-4b09-bba5-4f93f24f0a50" containerName="glance-httpd" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.826575 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="953845d5-c221-4b09-bba5-4f93f24f0a50" containerName="glance-httpd" Jan 31 07:13:40 crc kubenswrapper[4687]: E0131 07:13:40.826584 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953845d5-c221-4b09-bba5-4f93f24f0a50" containerName="glance-log" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.826590 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="953845d5-c221-4b09-bba5-4f93f24f0a50" containerName="glance-log" Jan 31 07:13:40 crc 
kubenswrapper[4687]: E0131 07:13:40.826602 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b5bb75-8a85-4ca2-9163-2dc69788ead2" containerName="glance-log" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.826608 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b5bb75-8a85-4ca2-9163-2dc69788ead2" containerName="glance-log" Jan 31 07:13:40 crc kubenswrapper[4687]: E0131 07:13:40.826797 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e15729c6-dfd3-4296-85ec-4f56ddeb93cc" containerName="glance-httpd" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.826802 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e15729c6-dfd3-4296-85ec-4f56ddeb93cc" containerName="glance-httpd" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.826946 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="953845d5-c221-4b09-bba5-4f93f24f0a50" containerName="glance-httpd" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.826963 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b5bb75-8a85-4ca2-9163-2dc69788ead2" containerName="glance-httpd" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.826981 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="953845d5-c221-4b09-bba5-4f93f24f0a50" containerName="glance-log" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.826992 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b5bb75-8a85-4ca2-9163-2dc69788ead2" containerName="glance-log" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.827000 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e15729c6-dfd3-4296-85ec-4f56ddeb93cc" containerName="glance-httpd" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.827009 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e15729c6-dfd3-4296-85ec-4f56ddeb93cc" containerName="glance-log" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 
07:13:40.827503 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance665d-account-delete-bzjxs" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.844589 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance665d-account-delete-bzjxs"] Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.865020 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d932ca4-16f7-4a16-a9e4-fa21948385eb-operator-scripts\") pod \"glance665d-account-delete-bzjxs\" (UID: \"1d932ca4-16f7-4a16-a9e4-fa21948385eb\") " pod="glance-kuttl-tests/glance665d-account-delete-bzjxs" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.865114 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl5nq\" (UniqueName: \"kubernetes.io/projected/1d932ca4-16f7-4a16-a9e4-fa21948385eb-kube-api-access-kl5nq\") pod \"glance665d-account-delete-bzjxs\" (UID: \"1d932ca4-16f7-4a16-a9e4-fa21948385eb\") " pod="glance-kuttl-tests/glance665d-account-delete-bzjxs" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.966195 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d932ca4-16f7-4a16-a9e4-fa21948385eb-operator-scripts\") pod \"glance665d-account-delete-bzjxs\" (UID: \"1d932ca4-16f7-4a16-a9e4-fa21948385eb\") " pod="glance-kuttl-tests/glance665d-account-delete-bzjxs" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.966639 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl5nq\" (UniqueName: \"kubernetes.io/projected/1d932ca4-16f7-4a16-a9e4-fa21948385eb-kube-api-access-kl5nq\") pod \"glance665d-account-delete-bzjxs\" (UID: \"1d932ca4-16f7-4a16-a9e4-fa21948385eb\") " 
pod="glance-kuttl-tests/glance665d-account-delete-bzjxs" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.967139 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d932ca4-16f7-4a16-a9e4-fa21948385eb-operator-scripts\") pod \"glance665d-account-delete-bzjxs\" (UID: \"1d932ca4-16f7-4a16-a9e4-fa21948385eb\") " pod="glance-kuttl-tests/glance665d-account-delete-bzjxs" Jan 31 07:13:40 crc kubenswrapper[4687]: I0131 07:13:40.986087 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl5nq\" (UniqueName: \"kubernetes.io/projected/1d932ca4-16f7-4a16-a9e4-fa21948385eb-kube-api-access-kl5nq\") pod \"glance665d-account-delete-bzjxs\" (UID: \"1d932ca4-16f7-4a16-a9e4-fa21948385eb\") " pod="glance-kuttl-tests/glance665d-account-delete-bzjxs" Jan 31 07:13:41 crc kubenswrapper[4687]: I0131 07:13:41.143366 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance665d-account-delete-bzjxs" Jan 31 07:13:41 crc kubenswrapper[4687]: I0131 07:13:41.611336 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f65f6b3-5ea2-4c44-bdf8-8557c3816f95" path="/var/lib/kubelet/pods/0f65f6b3-5ea2-4c44-bdf8-8557c3816f95/volumes" Jan 31 07:13:41 crc kubenswrapper[4687]: I0131 07:13:41.612379 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e15729c6-dfd3-4296-85ec-4f56ddeb93cc" path="/var/lib/kubelet/pods/e15729c6-dfd3-4296-85ec-4f56ddeb93cc/volumes" Jan 31 07:13:41 crc kubenswrapper[4687]: I0131 07:13:41.629286 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance665d-account-delete-bzjxs"] Jan 31 07:13:41 crc kubenswrapper[4687]: I0131 07:13:41.658240 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance665d-account-delete-bzjxs" 
event={"ID":"1d932ca4-16f7-4a16-a9e4-fa21948385eb","Type":"ContainerStarted","Data":"83c328133ac0316baaed77090f243e9a64b42400f38a7ac61da72b2f72a47ef7"} Jan 31 07:13:42 crc kubenswrapper[4687]: I0131 07:13:42.666443 4687 generic.go:334] "Generic (PLEG): container finished" podID="1d932ca4-16f7-4a16-a9e4-fa21948385eb" containerID="8051433f29f81d9091193f67191cacde0216e9cd3220069dbf489487eaa05c08" exitCode=0 Jan 31 07:13:42 crc kubenswrapper[4687]: I0131 07:13:42.666519 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance665d-account-delete-bzjxs" event={"ID":"1d932ca4-16f7-4a16-a9e4-fa21948385eb","Type":"ContainerDied","Data":"8051433f29f81d9091193f67191cacde0216e9cd3220069dbf489487eaa05c08"} Jan 31 07:13:43 crc kubenswrapper[4687]: I0131 07:13:43.929954 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 07:13:43 crc kubenswrapper[4687]: I0131 07:13:43.931233 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 07:13:43 crc kubenswrapper[4687]: I0131 07:13:43.934816 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"openstack-config-secret" Jan 31 07:13:43 crc kubenswrapper[4687]: I0131 07:13:43.935605 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"default-dockercfg-vwlv6" Jan 31 07:13:43 crc kubenswrapper[4687]: I0131 07:13:43.936279 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config" Jan 31 07:13:43 crc kubenswrapper[4687]: I0131 07:13:43.935702 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts-9db6gc427h" Jan 31 07:13:43 crc kubenswrapper[4687]: I0131 07:13:43.937277 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 07:13:43 crc kubenswrapper[4687]: I0131 07:13:43.959258 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance665d-account-delete-bzjxs" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.010092 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d932ca4-16f7-4a16-a9e4-fa21948385eb-operator-scripts\") pod \"1d932ca4-16f7-4a16-a9e4-fa21948385eb\" (UID: \"1d932ca4-16f7-4a16-a9e4-fa21948385eb\") " Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.010256 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl5nq\" (UniqueName: \"kubernetes.io/projected/1d932ca4-16f7-4a16-a9e4-fa21948385eb-kube-api-access-kl5nq\") pod \"1d932ca4-16f7-4a16-a9e4-fa21948385eb\" (UID: \"1d932ca4-16f7-4a16-a9e4-fa21948385eb\") " Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.010532 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret\") pod \"openstackclient\" (UID: \"17078dd3-3694-49b1-8513-fcc5e9af5902\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.010609 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9brt\" (UniqueName: \"kubernetes.io/projected/17078dd3-3694-49b1-8513-fcc5e9af5902-kube-api-access-c9brt\") pod \"openstackclient\" (UID: \"17078dd3-3694-49b1-8513-fcc5e9af5902\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.010678 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config\") pod \"openstackclient\" (UID: \"17078dd3-3694-49b1-8513-fcc5e9af5902\") " 
pod="glance-kuttl-tests/openstackclient" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.010739 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-scripts\") pod \"openstackclient\" (UID: \"17078dd3-3694-49b1-8513-fcc5e9af5902\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.011551 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d932ca4-16f7-4a16-a9e4-fa21948385eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d932ca4-16f7-4a16-a9e4-fa21948385eb" (UID: "1d932ca4-16f7-4a16-a9e4-fa21948385eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.015687 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d932ca4-16f7-4a16-a9e4-fa21948385eb-kube-api-access-kl5nq" (OuterVolumeSpecName: "kube-api-access-kl5nq") pod "1d932ca4-16f7-4a16-a9e4-fa21948385eb" (UID: "1d932ca4-16f7-4a16-a9e4-fa21948385eb"). InnerVolumeSpecName "kube-api-access-kl5nq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.112473 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9brt\" (UniqueName: \"kubernetes.io/projected/17078dd3-3694-49b1-8513-fcc5e9af5902-kube-api-access-c9brt\") pod \"openstackclient\" (UID: \"17078dd3-3694-49b1-8513-fcc5e9af5902\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.112545 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config\") pod \"openstackclient\" (UID: \"17078dd3-3694-49b1-8513-fcc5e9af5902\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.112593 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-scripts\") pod \"openstackclient\" (UID: \"17078dd3-3694-49b1-8513-fcc5e9af5902\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.112686 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret\") pod \"openstackclient\" (UID: \"17078dd3-3694-49b1-8513-fcc5e9af5902\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.112884 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d932ca4-16f7-4a16-a9e4-fa21948385eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.112902 4687 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-kl5nq\" (UniqueName: \"kubernetes.io/projected/1d932ca4-16f7-4a16-a9e4-fa21948385eb-kube-api-access-kl5nq\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.352927 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9brt\" (UniqueName: \"kubernetes.io/projected/17078dd3-3694-49b1-8513-fcc5e9af5902-kube-api-access-c9brt\") pod \"openstackclient\" (UID: \"17078dd3-3694-49b1-8513-fcc5e9af5902\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.352925 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config\") pod \"openstackclient\" (UID: \"17078dd3-3694-49b1-8513-fcc5e9af5902\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.352936 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-scripts\") pod \"openstackclient\" (UID: \"17078dd3-3694-49b1-8513-fcc5e9af5902\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.355812 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret\") pod \"openstackclient\" (UID: \"17078dd3-3694-49b1-8513-fcc5e9af5902\") " pod="glance-kuttl-tests/openstackclient" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.569770 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.690459 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance665d-account-delete-bzjxs" event={"ID":"1d932ca4-16f7-4a16-a9e4-fa21948385eb","Type":"ContainerDied","Data":"83c328133ac0316baaed77090f243e9a64b42400f38a7ac61da72b2f72a47ef7"} Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.690499 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83c328133ac0316baaed77090f243e9a64b42400f38a7ac61da72b2f72a47ef7" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.690563 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance665d-account-delete-bzjxs" Jan 31 07:13:44 crc kubenswrapper[4687]: I0131 07:13:44.987757 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 07:13:44 crc kubenswrapper[4687]: W0131 07:13:44.988126 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17078dd3_3694_49b1_8513_fcc5e9af5902.slice/crio-d424f4bf43c5af4aa3237b4698a75c8292cc90a0bfba0c1f59432dac3dce70c0 WatchSource:0}: Error finding container d424f4bf43c5af4aa3237b4698a75c8292cc90a0bfba0c1f59432dac3dce70c0: Status 404 returned error can't find the container with id d424f4bf43c5af4aa3237b4698a75c8292cc90a0bfba0c1f59432dac3dce70c0 Jan 31 07:13:45 crc kubenswrapper[4687]: I0131 07:13:45.699300 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"17078dd3-3694-49b1-8513-fcc5e9af5902","Type":"ContainerStarted","Data":"a3d9205bfbf1ed5c0a8ea32842d3ca50317ab6bf6341eb7473fd0ccf5eef7af8"} Jan 31 07:13:45 crc kubenswrapper[4687]: I0131 07:13:45.699631 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" 
event={"ID":"17078dd3-3694-49b1-8513-fcc5e9af5902","Type":"ContainerStarted","Data":"d424f4bf43c5af4aa3237b4698a75c8292cc90a0bfba0c1f59432dac3dce70c0"} Jan 31 07:13:45 crc kubenswrapper[4687]: I0131 07:13:45.713446 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstackclient" podStartSLOduration=2.7134009199999998 podStartE2EDuration="2.71340092s" podCreationTimestamp="2026-01-31 07:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:13:45.711481868 +0000 UTC m=+1851.988741473" watchObservedRunningTime="2026-01-31 07:13:45.71340092 +0000 UTC m=+1851.990660485" Jan 31 07:13:45 crc kubenswrapper[4687]: I0131 07:13:45.858551 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-6b297"] Jan 31 07:13:45 crc kubenswrapper[4687]: I0131 07:13:45.864159 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-6b297"] Jan 31 07:13:45 crc kubenswrapper[4687]: I0131 07:13:45.875111 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-665d-account-create-update-bp9g4"] Jan 31 07:13:45 crc kubenswrapper[4687]: I0131 07:13:45.881870 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-665d-account-create-update-bp9g4"] Jan 31 07:13:45 crc kubenswrapper[4687]: I0131 07:13:45.887551 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance665d-account-delete-bzjxs"] Jan 31 07:13:45 crc kubenswrapper[4687]: I0131 07:13:45.894182 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance665d-account-delete-bzjxs"] Jan 31 07:13:45 crc kubenswrapper[4687]: I0131 07:13:45.951533 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-844df"] Jan 31 07:13:45 crc kubenswrapper[4687]: E0131 
07:13:45.951841 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d932ca4-16f7-4a16-a9e4-fa21948385eb" containerName="mariadb-account-delete" Jan 31 07:13:45 crc kubenswrapper[4687]: I0131 07:13:45.951858 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d932ca4-16f7-4a16-a9e4-fa21948385eb" containerName="mariadb-account-delete" Jan 31 07:13:45 crc kubenswrapper[4687]: I0131 07:13:45.952012 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d932ca4-16f7-4a16-a9e4-fa21948385eb" containerName="mariadb-account-delete" Jan 31 07:13:45 crc kubenswrapper[4687]: I0131 07:13:45.952553 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-844df" Jan 31 07:13:45 crc kubenswrapper[4687]: I0131 07:13:45.957855 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-844df"] Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.043217 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08143980-4935-4851-b898-5b47179db36e-operator-scripts\") pod \"glance-db-create-844df\" (UID: \"08143980-4935-4851-b898-5b47179db36e\") " pod="glance-kuttl-tests/glance-db-create-844df" Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.044088 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9wwm\" (UniqueName: \"kubernetes.io/projected/08143980-4935-4851-b898-5b47179db36e-kube-api-access-x9wwm\") pod \"glance-db-create-844df\" (UID: \"08143980-4935-4851-b898-5b47179db36e\") " pod="glance-kuttl-tests/glance-db-create-844df" Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.057290 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5"] Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.058236 
4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5" Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.062920 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.068775 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5"] Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.145318 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vszbv\" (UniqueName: \"kubernetes.io/projected/80d27888-0b55-47a9-9e0a-6743273844e5-kube-api-access-vszbv\") pod \"glance-bbaa-account-create-update-wlcr5\" (UID: \"80d27888-0b55-47a9-9e0a-6743273844e5\") " pod="glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5" Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.145419 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08143980-4935-4851-b898-5b47179db36e-operator-scripts\") pod \"glance-db-create-844df\" (UID: \"08143980-4935-4851-b898-5b47179db36e\") " pod="glance-kuttl-tests/glance-db-create-844df" Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.145450 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9wwm\" (UniqueName: \"kubernetes.io/projected/08143980-4935-4851-b898-5b47179db36e-kube-api-access-x9wwm\") pod \"glance-db-create-844df\" (UID: \"08143980-4935-4851-b898-5b47179db36e\") " pod="glance-kuttl-tests/glance-db-create-844df" Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.145486 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/80d27888-0b55-47a9-9e0a-6743273844e5-operator-scripts\") pod \"glance-bbaa-account-create-update-wlcr5\" (UID: \"80d27888-0b55-47a9-9e0a-6743273844e5\") " pod="glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5" Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.146539 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08143980-4935-4851-b898-5b47179db36e-operator-scripts\") pod \"glance-db-create-844df\" (UID: \"08143980-4935-4851-b898-5b47179db36e\") " pod="glance-kuttl-tests/glance-db-create-844df" Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.165659 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9wwm\" (UniqueName: \"kubernetes.io/projected/08143980-4935-4851-b898-5b47179db36e-kube-api-access-x9wwm\") pod \"glance-db-create-844df\" (UID: \"08143980-4935-4851-b898-5b47179db36e\") " pod="glance-kuttl-tests/glance-db-create-844df" Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.247268 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vszbv\" (UniqueName: \"kubernetes.io/projected/80d27888-0b55-47a9-9e0a-6743273844e5-kube-api-access-vszbv\") pod \"glance-bbaa-account-create-update-wlcr5\" (UID: \"80d27888-0b55-47a9-9e0a-6743273844e5\") " pod="glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5" Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.247436 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80d27888-0b55-47a9-9e0a-6743273844e5-operator-scripts\") pod \"glance-bbaa-account-create-update-wlcr5\" (UID: \"80d27888-0b55-47a9-9e0a-6743273844e5\") " pod="glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5" Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.248894 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80d27888-0b55-47a9-9e0a-6743273844e5-operator-scripts\") pod \"glance-bbaa-account-create-update-wlcr5\" (UID: \"80d27888-0b55-47a9-9e0a-6743273844e5\") " pod="glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5" Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.265273 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vszbv\" (UniqueName: \"kubernetes.io/projected/80d27888-0b55-47a9-9e0a-6743273844e5-kube-api-access-vszbv\") pod \"glance-bbaa-account-create-update-wlcr5\" (UID: \"80d27888-0b55-47a9-9e0a-6743273844e5\") " pod="glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5" Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.271182 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-844df" Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.382089 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5" Jan 31 07:13:46 crc kubenswrapper[4687]: W0131 07:13:46.688842 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08143980_4935_4851_b898_5b47179db36e.slice/crio-ee699e4f0d41f95f13016aa9b90b6095fb08c3611f41dcd8a906f40819bd65e8 WatchSource:0}: Error finding container ee699e4f0d41f95f13016aa9b90b6095fb08c3611f41dcd8a906f40819bd65e8: Status 404 returned error can't find the container with id ee699e4f0d41f95f13016aa9b90b6095fb08c3611f41dcd8a906f40819bd65e8 Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.690791 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-844df"] Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.708679 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-844df" event={"ID":"08143980-4935-4851-b898-5b47179db36e","Type":"ContainerStarted","Data":"ee699e4f0d41f95f13016aa9b90b6095fb08c3611f41dcd8a906f40819bd65e8"} Jan 31 07:13:46 crc kubenswrapper[4687]: I0131 07:13:46.797978 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5"] Jan 31 07:13:47 crc kubenswrapper[4687]: I0131 07:13:47.612147 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d932ca4-16f7-4a16-a9e4-fa21948385eb" path="/var/lib/kubelet/pods/1d932ca4-16f7-4a16-a9e4-fa21948385eb/volumes" Jan 31 07:13:47 crc kubenswrapper[4687]: I0131 07:13:47.613502 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8189cedc-a578-41ea-89ea-75af7e188168" path="/var/lib/kubelet/pods/8189cedc-a578-41ea-89ea-75af7e188168/volumes" Jan 31 07:13:47 crc kubenswrapper[4687]: I0131 07:13:47.614469 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2cbc54d-d182-4b8f-8e1d-a63b109bb41f" 
path="/var/lib/kubelet/pods/b2cbc54d-d182-4b8f-8e1d-a63b109bb41f/volumes" Jan 31 07:13:47 crc kubenswrapper[4687]: I0131 07:13:47.716223 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5" event={"ID":"80d27888-0b55-47a9-9e0a-6743273844e5","Type":"ContainerStarted","Data":"dfe225a848b4dcd875b31d396dead41aebd8c8557d0ed6d237318a6400d0cebf"} Jan 31 07:13:47 crc kubenswrapper[4687]: I0131 07:13:47.716272 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5" event={"ID":"80d27888-0b55-47a9-9e0a-6743273844e5","Type":"ContainerStarted","Data":"b0bbd830a3cff9ef4b4b64344899caaa33f2e2185a2b464ef289acae2b98ab13"} Jan 31 07:13:47 crc kubenswrapper[4687]: I0131 07:13:47.717509 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-844df" event={"ID":"08143980-4935-4851-b898-5b47179db36e","Type":"ContainerStarted","Data":"afa513251b126ee79e7fc5ce61450365d1fc9a490004cad8921400888003356f"} Jan 31 07:13:47 crc kubenswrapper[4687]: I0131 07:13:47.734171 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5" podStartSLOduration=1.7341502709999999 podStartE2EDuration="1.734150271s" podCreationTimestamp="2026-01-31 07:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:13:47.729635058 +0000 UTC m=+1854.006894633" watchObservedRunningTime="2026-01-31 07:13:47.734150271 +0000 UTC m=+1854.011409846" Jan 31 07:13:47 crc kubenswrapper[4687]: I0131 07:13:47.745786 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-create-844df" podStartSLOduration=2.7457679390000003 podStartE2EDuration="2.745767939s" podCreationTimestamp="2026-01-31 07:13:45 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:13:47.743642471 +0000 UTC m=+1854.020902046" watchObservedRunningTime="2026-01-31 07:13:47.745767939 +0000 UTC m=+1854.023027504" Jan 31 07:13:48 crc kubenswrapper[4687]: I0131 07:13:48.727216 4687 generic.go:334] "Generic (PLEG): container finished" podID="08143980-4935-4851-b898-5b47179db36e" containerID="afa513251b126ee79e7fc5ce61450365d1fc9a490004cad8921400888003356f" exitCode=0 Jan 31 07:13:48 crc kubenswrapper[4687]: I0131 07:13:48.727307 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-844df" event={"ID":"08143980-4935-4851-b898-5b47179db36e","Type":"ContainerDied","Data":"afa513251b126ee79e7fc5ce61450365d1fc9a490004cad8921400888003356f"} Jan 31 07:13:49 crc kubenswrapper[4687]: I0131 07:13:49.740991 4687 generic.go:334] "Generic (PLEG): container finished" podID="80d27888-0b55-47a9-9e0a-6743273844e5" containerID="dfe225a848b4dcd875b31d396dead41aebd8c8557d0ed6d237318a6400d0cebf" exitCode=0 Jan 31 07:13:49 crc kubenswrapper[4687]: I0131 07:13:49.741364 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5" event={"ID":"80d27888-0b55-47a9-9e0a-6743273844e5","Type":"ContainerDied","Data":"dfe225a848b4dcd875b31d396dead41aebd8c8557d0ed6d237318a6400d0cebf"} Jan 31 07:13:50 crc kubenswrapper[4687]: I0131 07:13:50.017634 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-844df" Jan 31 07:13:50 crc kubenswrapper[4687]: I0131 07:13:50.101684 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9wwm\" (UniqueName: \"kubernetes.io/projected/08143980-4935-4851-b898-5b47179db36e-kube-api-access-x9wwm\") pod \"08143980-4935-4851-b898-5b47179db36e\" (UID: \"08143980-4935-4851-b898-5b47179db36e\") " Jan 31 07:13:50 crc kubenswrapper[4687]: I0131 07:13:50.102791 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08143980-4935-4851-b898-5b47179db36e-operator-scripts\") pod \"08143980-4935-4851-b898-5b47179db36e\" (UID: \"08143980-4935-4851-b898-5b47179db36e\") " Jan 31 07:13:50 crc kubenswrapper[4687]: I0131 07:13:50.103322 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08143980-4935-4851-b898-5b47179db36e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08143980-4935-4851-b898-5b47179db36e" (UID: "08143980-4935-4851-b898-5b47179db36e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:13:50 crc kubenswrapper[4687]: I0131 07:13:50.107423 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08143980-4935-4851-b898-5b47179db36e-kube-api-access-x9wwm" (OuterVolumeSpecName: "kube-api-access-x9wwm") pod "08143980-4935-4851-b898-5b47179db36e" (UID: "08143980-4935-4851-b898-5b47179db36e"). InnerVolumeSpecName "kube-api-access-x9wwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:13:50 crc kubenswrapper[4687]: I0131 07:13:50.204150 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9wwm\" (UniqueName: \"kubernetes.io/projected/08143980-4935-4851-b898-5b47179db36e-kube-api-access-x9wwm\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:50 crc kubenswrapper[4687]: I0131 07:13:50.204188 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08143980-4935-4851-b898-5b47179db36e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:50 crc kubenswrapper[4687]: I0131 07:13:50.749358 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-844df" Jan 31 07:13:50 crc kubenswrapper[4687]: I0131 07:13:50.750398 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-844df" event={"ID":"08143980-4935-4851-b898-5b47179db36e","Type":"ContainerDied","Data":"ee699e4f0d41f95f13016aa9b90b6095fb08c3611f41dcd8a906f40819bd65e8"} Jan 31 07:13:50 crc kubenswrapper[4687]: I0131 07:13:50.750451 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee699e4f0d41f95f13016aa9b90b6095fb08c3611f41dcd8a906f40819bd65e8" Jan 31 07:13:51 crc kubenswrapper[4687]: I0131 07:13:51.020155 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5" Jan 31 07:13:51 crc kubenswrapper[4687]: I0131 07:13:51.116825 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vszbv\" (UniqueName: \"kubernetes.io/projected/80d27888-0b55-47a9-9e0a-6743273844e5-kube-api-access-vszbv\") pod \"80d27888-0b55-47a9-9e0a-6743273844e5\" (UID: \"80d27888-0b55-47a9-9e0a-6743273844e5\") " Jan 31 07:13:51 crc kubenswrapper[4687]: I0131 07:13:51.117498 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80d27888-0b55-47a9-9e0a-6743273844e5-operator-scripts\") pod \"80d27888-0b55-47a9-9e0a-6743273844e5\" (UID: \"80d27888-0b55-47a9-9e0a-6743273844e5\") " Jan 31 07:13:51 crc kubenswrapper[4687]: I0131 07:13:51.117966 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80d27888-0b55-47a9-9e0a-6743273844e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80d27888-0b55-47a9-9e0a-6743273844e5" (UID: "80d27888-0b55-47a9-9e0a-6743273844e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:13:51 crc kubenswrapper[4687]: I0131 07:13:51.122331 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80d27888-0b55-47a9-9e0a-6743273844e5-kube-api-access-vszbv" (OuterVolumeSpecName: "kube-api-access-vszbv") pod "80d27888-0b55-47a9-9e0a-6743273844e5" (UID: "80d27888-0b55-47a9-9e0a-6743273844e5"). InnerVolumeSpecName "kube-api-access-vszbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:13:51 crc kubenswrapper[4687]: I0131 07:13:51.219598 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vszbv\" (UniqueName: \"kubernetes.io/projected/80d27888-0b55-47a9-9e0a-6743273844e5-kube-api-access-vszbv\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:51 crc kubenswrapper[4687]: I0131 07:13:51.219929 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80d27888-0b55-47a9-9e0a-6743273844e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:13:51 crc kubenswrapper[4687]: I0131 07:13:51.759024 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5" event={"ID":"80d27888-0b55-47a9-9e0a-6743273844e5","Type":"ContainerDied","Data":"b0bbd830a3cff9ef4b4b64344899caaa33f2e2185a2b464ef289acae2b98ab13"} Jan 31 07:13:51 crc kubenswrapper[4687]: I0131 07:13:51.759080 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0bbd830a3cff9ef4b4b64344899caaa33f2e2185a2b464ef289acae2b98ab13" Jan 31 07:13:51 crc kubenswrapper[4687]: I0131 07:13:51.759221 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.212781 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-4wg52"] Jan 31 07:13:56 crc kubenswrapper[4687]: E0131 07:13:56.213371 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08143980-4935-4851-b898-5b47179db36e" containerName="mariadb-database-create" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.213383 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="08143980-4935-4851-b898-5b47179db36e" containerName="mariadb-database-create" Jan 31 07:13:56 crc kubenswrapper[4687]: E0131 07:13:56.213421 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80d27888-0b55-47a9-9e0a-6743273844e5" containerName="mariadb-account-create-update" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.213430 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="80d27888-0b55-47a9-9e0a-6743273844e5" containerName="mariadb-account-create-update" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.213582 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="80d27888-0b55-47a9-9e0a-6743273844e5" containerName="mariadb-account-create-update" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.213605 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="08143980-4935-4851-b898-5b47179db36e" containerName="mariadb-database-create" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.214141 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-4wg52" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.218126 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-rc7jr" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.219564 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.221585 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-4wg52"] Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.308489 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/309e722b-24cb-44b9-8afe-7c131a789fa5-db-sync-config-data\") pod \"glance-db-sync-4wg52\" (UID: \"309e722b-24cb-44b9-8afe-7c131a789fa5\") " pod="glance-kuttl-tests/glance-db-sync-4wg52" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.308603 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmffb\" (UniqueName: \"kubernetes.io/projected/309e722b-24cb-44b9-8afe-7c131a789fa5-kube-api-access-vmffb\") pod \"glance-db-sync-4wg52\" (UID: \"309e722b-24cb-44b9-8afe-7c131a789fa5\") " pod="glance-kuttl-tests/glance-db-sync-4wg52" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.308656 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/309e722b-24cb-44b9-8afe-7c131a789fa5-config-data\") pod \"glance-db-sync-4wg52\" (UID: \"309e722b-24cb-44b9-8afe-7c131a789fa5\") " pod="glance-kuttl-tests/glance-db-sync-4wg52" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.409920 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/309e722b-24cb-44b9-8afe-7c131a789fa5-db-sync-config-data\") pod \"glance-db-sync-4wg52\" (UID: \"309e722b-24cb-44b9-8afe-7c131a789fa5\") " pod="glance-kuttl-tests/glance-db-sync-4wg52" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.410318 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmffb\" (UniqueName: \"kubernetes.io/projected/309e722b-24cb-44b9-8afe-7c131a789fa5-kube-api-access-vmffb\") pod \"glance-db-sync-4wg52\" (UID: \"309e722b-24cb-44b9-8afe-7c131a789fa5\") " pod="glance-kuttl-tests/glance-db-sync-4wg52" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.410361 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/309e722b-24cb-44b9-8afe-7c131a789fa5-config-data\") pod \"glance-db-sync-4wg52\" (UID: \"309e722b-24cb-44b9-8afe-7c131a789fa5\") " pod="glance-kuttl-tests/glance-db-sync-4wg52" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.426260 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/309e722b-24cb-44b9-8afe-7c131a789fa5-db-sync-config-data\") pod \"glance-db-sync-4wg52\" (UID: \"309e722b-24cb-44b9-8afe-7c131a789fa5\") " pod="glance-kuttl-tests/glance-db-sync-4wg52" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.427886 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/309e722b-24cb-44b9-8afe-7c131a789fa5-config-data\") pod \"glance-db-sync-4wg52\" (UID: \"309e722b-24cb-44b9-8afe-7c131a789fa5\") " pod="glance-kuttl-tests/glance-db-sync-4wg52" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.442711 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmffb\" (UniqueName: \"kubernetes.io/projected/309e722b-24cb-44b9-8afe-7c131a789fa5-kube-api-access-vmffb\") pod 
\"glance-db-sync-4wg52\" (UID: \"309e722b-24cb-44b9-8afe-7c131a789fa5\") " pod="glance-kuttl-tests/glance-db-sync-4wg52" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.530910 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-4wg52" Jan 31 07:13:56 crc kubenswrapper[4687]: I0131 07:13:56.974547 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-4wg52"] Jan 31 07:13:57 crc kubenswrapper[4687]: I0131 07:13:57.814490 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-4wg52" event={"ID":"309e722b-24cb-44b9-8afe-7c131a789fa5","Type":"ContainerStarted","Data":"7c90efcf32d96cb6e664df07f9eafde2a35a9d4b4af2f5a6085b97dabefc3e4d"} Jan 31 07:13:57 crc kubenswrapper[4687]: I0131 07:13:57.814898 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-4wg52" event={"ID":"309e722b-24cb-44b9-8afe-7c131a789fa5","Type":"ContainerStarted","Data":"5c6e52b8e39c71d45006b4d7c2bdf4f60b72f815eaacf7372d213c2611172c6b"} Jan 31 07:13:57 crc kubenswrapper[4687]: I0131 07:13:57.827085 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-4wg52" podStartSLOduration=1.8270686139999999 podStartE2EDuration="1.827068614s" podCreationTimestamp="2026-01-31 07:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:13:57.826540669 +0000 UTC m=+1864.103800254" watchObservedRunningTime="2026-01-31 07:13:57.827068614 +0000 UTC m=+1864.104328189" Jan 31 07:14:00 crc kubenswrapper[4687]: I0131 07:14:00.838766 4687 generic.go:334] "Generic (PLEG): container finished" podID="309e722b-24cb-44b9-8afe-7c131a789fa5" containerID="7c90efcf32d96cb6e664df07f9eafde2a35a9d4b4af2f5a6085b97dabefc3e4d" exitCode=0 Jan 31 07:14:00 crc kubenswrapper[4687]: I0131 07:14:00.838867 
4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-4wg52" event={"ID":"309e722b-24cb-44b9-8afe-7c131a789fa5","Type":"ContainerDied","Data":"7c90efcf32d96cb6e664df07f9eafde2a35a9d4b4af2f5a6085b97dabefc3e4d"} Jan 31 07:14:02 crc kubenswrapper[4687]: I0131 07:14:02.120310 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-4wg52" Jan 31 07:14:02 crc kubenswrapper[4687]: I0131 07:14:02.209234 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/309e722b-24cb-44b9-8afe-7c131a789fa5-db-sync-config-data\") pod \"309e722b-24cb-44b9-8afe-7c131a789fa5\" (UID: \"309e722b-24cb-44b9-8afe-7c131a789fa5\") " Jan 31 07:14:02 crc kubenswrapper[4687]: I0131 07:14:02.209291 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmffb\" (UniqueName: \"kubernetes.io/projected/309e722b-24cb-44b9-8afe-7c131a789fa5-kube-api-access-vmffb\") pod \"309e722b-24cb-44b9-8afe-7c131a789fa5\" (UID: \"309e722b-24cb-44b9-8afe-7c131a789fa5\") " Jan 31 07:14:02 crc kubenswrapper[4687]: I0131 07:14:02.209325 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/309e722b-24cb-44b9-8afe-7c131a789fa5-config-data\") pod \"309e722b-24cb-44b9-8afe-7c131a789fa5\" (UID: \"309e722b-24cb-44b9-8afe-7c131a789fa5\") " Jan 31 07:14:02 crc kubenswrapper[4687]: I0131 07:14:02.214672 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/309e722b-24cb-44b9-8afe-7c131a789fa5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "309e722b-24cb-44b9-8afe-7c131a789fa5" (UID: "309e722b-24cb-44b9-8afe-7c131a789fa5"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:14:02 crc kubenswrapper[4687]: I0131 07:14:02.214715 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/309e722b-24cb-44b9-8afe-7c131a789fa5-kube-api-access-vmffb" (OuterVolumeSpecName: "kube-api-access-vmffb") pod "309e722b-24cb-44b9-8afe-7c131a789fa5" (UID: "309e722b-24cb-44b9-8afe-7c131a789fa5"). InnerVolumeSpecName "kube-api-access-vmffb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:14:02 crc kubenswrapper[4687]: I0131 07:14:02.248674 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/309e722b-24cb-44b9-8afe-7c131a789fa5-config-data" (OuterVolumeSpecName: "config-data") pod "309e722b-24cb-44b9-8afe-7c131a789fa5" (UID: "309e722b-24cb-44b9-8afe-7c131a789fa5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:14:02 crc kubenswrapper[4687]: I0131 07:14:02.311246 4687 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/309e722b-24cb-44b9-8afe-7c131a789fa5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:02 crc kubenswrapper[4687]: I0131 07:14:02.311300 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmffb\" (UniqueName: \"kubernetes.io/projected/309e722b-24cb-44b9-8afe-7c131a789fa5-kube-api-access-vmffb\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:02 crc kubenswrapper[4687]: I0131 07:14:02.311315 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/309e722b-24cb-44b9-8afe-7c131a789fa5-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:02 crc kubenswrapper[4687]: I0131 07:14:02.857371 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-4wg52" 
event={"ID":"309e722b-24cb-44b9-8afe-7c131a789fa5","Type":"ContainerDied","Data":"5c6e52b8e39c71d45006b4d7c2bdf4f60b72f815eaacf7372d213c2611172c6b"} Jan 31 07:14:02 crc kubenswrapper[4687]: I0131 07:14:02.857437 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c6e52b8e39c71d45006b4d7c2bdf4f60b72f815eaacf7372d213c2611172c6b" Jan 31 07:14:02 crc kubenswrapper[4687]: I0131 07:14:02.857497 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-4wg52" Jan 31 07:14:03 crc kubenswrapper[4687]: I0131 07:14:03.053577 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/root-account-create-update-gr7cz"] Jan 31 07:14:03 crc kubenswrapper[4687]: I0131 07:14:03.059103 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/root-account-create-update-gr7cz"] Jan 31 07:14:03 crc kubenswrapper[4687]: I0131 07:14:03.611163 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6" path="/var/lib/kubelet/pods/f30c8a06-e4ce-4647-aec5-e2cdbd4c04c6/volumes" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.049447 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:14:04 crc kubenswrapper[4687]: E0131 07:14:04.050109 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309e722b-24cb-44b9-8afe-7c131a789fa5" containerName="glance-db-sync" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.050129 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="309e722b-24cb-44b9-8afe-7c131a789fa5" containerName="glance-db-sync" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.050278 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="309e722b-24cb-44b9-8afe-7c131a789fa5" containerName="glance-db-sync" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.051298 4687 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.054294 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.054653 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-rc7jr" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.054793 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-external-config-data" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.076986 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.138633 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-sys\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.138681 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fb68cd-53bd-4337-b199-605b7c23c33b-config-data\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.138708 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27fb68cd-53bd-4337-b199-605b7c23c33b-logs\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " 
pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.138731 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/27fb68cd-53bd-4337-b199-605b7c23c33b-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.138752 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.138778 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.139026 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.139128 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-dev\") pod \"glance-default-external-api-1\" (UID: 
\"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.139221 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.139285 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.139660 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-run\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.139756 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27fb68cd-53bd-4337-b199-605b7c23c33b-scripts\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.139782 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b85w7\" (UniqueName: \"kubernetes.io/projected/27fb68cd-53bd-4337-b199-605b7c23c33b-kube-api-access-b85w7\") pod 
\"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.139810 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.238567 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.239662 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241256 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241430 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241449 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-sys\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") 
" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241506 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-sys\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241535 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fb68cd-53bd-4337-b199-605b7c23c33b-config-data\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241574 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27fb68cd-53bd-4337-b199-605b7c23c33b-logs\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241623 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/27fb68cd-53bd-4337-b199-605b7c23c33b-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241642 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 
07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241672 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241760 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241796 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-dev\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241812 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241761 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241905 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241940 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.242290 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27fb68cd-53bd-4337-b199-605b7c23c33b-logs\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241990 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.242066 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.242182 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for 
volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") device mount path \"/mnt/openstack/pv06\"" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.241963 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-dev\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.242376 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-run\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.242435 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-run\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.242378 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/27fb68cd-53bd-4337-b199-605b7c23c33b-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.242521 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/27fb68cd-53bd-4337-b199-605b7c23c33b-scripts\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.242550 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b85w7\" (UniqueName: \"kubernetes.io/projected/27fb68cd-53bd-4337-b199-605b7c23c33b-kube-api-access-b85w7\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.246574 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27fb68cd-53bd-4337-b199-605b7c23c33b-scripts\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.247578 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fb68cd-53bd-4337-b199-605b7c23c33b-config-data\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.259789 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.270771 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.278176 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b85w7\" (UniqueName: \"kubernetes.io/projected/27fb68cd-53bd-4337-b199-605b7c23c33b-kube-api-access-b85w7\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.281198 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-1\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.317375 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.320265 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.322930 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.330364 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.338192 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.339768 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.348203 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.348283 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.348335 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-dev\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.348355 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-run\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.348382 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqgqr\" (UniqueName: \"kubernetes.io/projected/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-kube-api-access-nqgqr\") pod 
\"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.348431 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.348464 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-scripts\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.348489 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.349194 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.349240 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-logs\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.349275 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.349350 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-config-data\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.349371 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.349399 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-sys\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.368972 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.369164 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.451918 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f17d8d-4014-4805-b74a-8bca66f58e8f-scripts\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.451979 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452014 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452036 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f17d8d-4014-4805-b74a-8bca66f58e8f-config-data\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452065 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452088 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-scripts\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452111 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-sys\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452141 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452164 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452185 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452213 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452235 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-logs\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452261 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452286 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-run\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452311 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85c7b\" 
(UniqueName: \"kubernetes.io/projected/31f17d8d-4014-4805-b74a-8bca66f58e8f-kube-api-access-85c7b\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452333 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7def500d-1af6-481b-b69e-6bd383df2252-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452359 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-dev\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452391 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31f17d8d-4014-4805-b74a-8bca66f58e8f-logs\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452431 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-config-data\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452452 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.452475 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-sys\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453142 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453278 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-run\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453313 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7def500d-1af6-481b-b69e-6bd383df2252-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453319 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453337 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453359 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7def500d-1af6-481b-b69e-6bd383df2252-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453387 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31f17d8d-4014-4805-b74a-8bca66f58e8f-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453422 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-sys\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453448 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453475 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453506 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453539 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-dev\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453562 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453582 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7def500d-1af6-481b-b69e-6bd383df2252-logs\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453605 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-run\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453626 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvxwm\" (UniqueName: \"kubernetes.io/projected/7def500d-1af6-481b-b69e-6bd383df2252-kube-api-access-fvxwm\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453666 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453691 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453715 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453742 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453769 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqgqr\" (UniqueName: \"kubernetes.io/projected/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-kube-api-access-nqgqr\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453791 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-dev\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.453811 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-logs\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.454290 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.454320 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-sys\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.454431 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.454642 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-dev\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.454662 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.454691 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" 
(UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-run\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.454713 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.454749 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.455121 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") device mount path \"/mnt/openstack/pv17\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.461113 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-config-data\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.462303 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-scripts\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.477305 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqgqr\" (UniqueName: \"kubernetes.io/projected/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-kube-api-access-nqgqr\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.487109 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.488711 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.554799 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-run\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555149 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7def500d-1af6-481b-b69e-6bd383df2252-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555173 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555190 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7def500d-1af6-481b-b69e-6bd383df2252-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555207 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-sys\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555225 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31f17d8d-4014-4805-b74a-8bca66f58e8f-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555246 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: 
\"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.554977 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-run\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555324 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-sys\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555270 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555368 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555442 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7def500d-1af6-481b-b69e-6bd383df2252-logs\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555473 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555506 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvxwm\" (UniqueName: \"kubernetes.io/projected/7def500d-1af6-481b-b69e-6bd383df2252-kube-api-access-fvxwm\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555524 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555542 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555562 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " 
pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555585 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555623 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-dev\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555661 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f17d8d-4014-4805-b74a-8bca66f58e8f-scripts\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555703 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7def500d-1af6-481b-b69e-6bd383df2252-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555718 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc 
kubenswrapper[4687]: I0131 07:14:04.555651 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555819 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555870 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31f17d8d-4014-4805-b74a-8bca66f58e8f-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555939 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555334 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc 
kubenswrapper[4687]: I0131 07:14:04.556049 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.556113 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.556147 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.556189 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.556220 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-dev\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.556225 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7def500d-1af6-481b-b69e-6bd383df2252-logs\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.555775 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f17d8d-4014-4805-b74a-8bca66f58e8f-config-data\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.556793 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.556834 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-sys\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.556882 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.556907 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.557006 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-run\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.557041 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85c7b\" (UniqueName: \"kubernetes.io/projected/31f17d8d-4014-4805-b74a-8bca66f58e8f-kube-api-access-85c7b\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.557077 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7def500d-1af6-481b-b69e-6bd383df2252-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.557112 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-dev\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.557183 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31f17d8d-4014-4805-b74a-8bca66f58e8f-logs\") pod 
\"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.558606 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.559569 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-sys\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.559634 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-run\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.559671 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") device mount path \"/mnt/openstack/pv15\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.560482 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: 
\"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.560549 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-dev\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.562639 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f17d8d-4014-4805-b74a-8bca66f58e8f-scripts\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.562877 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7def500d-1af6-481b-b69e-6bd383df2252-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.567920 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7def500d-1af6-481b-b69e-6bd383df2252-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.570451 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f17d8d-4014-4805-b74a-8bca66f58e8f-config-data\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 
07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.583273 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvxwm\" (UniqueName: \"kubernetes.io/projected/7def500d-1af6-481b-b69e-6bd383df2252-kube-api-access-fvxwm\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.584286 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31f17d8d-4014-4805-b74a-8bca66f58e8f-logs\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.590934 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85c7b\" (UniqueName: \"kubernetes.io/projected/31f17d8d-4014-4805-b74a-8bca66f58e8f-kube-api-access-85c7b\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.605146 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.605383 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.611359 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-1\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.612983 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.626924 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.641313 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.658469 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.904898 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:14:04 crc kubenswrapper[4687]: I0131 07:14:04.928582 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:14:04 crc kubenswrapper[4687]: W0131 07:14:04.940002 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d7bf209_cf34_4f0c_89ff_b1d92df146c0.slice/crio-9304a720ab6c2249857d8bf9347dd6e27510bb964b833155cdff17dc766390a0 WatchSource:0}: Error finding container 9304a720ab6c2249857d8bf9347dd6e27510bb964b833155cdff17dc766390a0: Status 404 returned error can't find the container with id 9304a720ab6c2249857d8bf9347dd6e27510bb964b833155cdff17dc766390a0 Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.004317 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:14:05 crc kubenswrapper[4687]: W0131 07:14:05.017641 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31f17d8d_4014_4805_b74a_8bca66f58e8f.slice/crio-5df82e469df70e4f4ef2915146df3d704476e8d5b2c02c705faa8e0f8dc6e968 WatchSource:0}: Error finding container 5df82e469df70e4f4ef2915146df3d704476e8d5b2c02c705faa8e0f8dc6e968: Status 404 returned error can't find the container with id 5df82e469df70e4f4ef2915146df3d704476e8d5b2c02c705faa8e0f8dc6e968 Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.203442 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.283227 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.897679 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"27fb68cd-53bd-4337-b199-605b7c23c33b","Type":"ContainerStarted","Data":"0a543146aebf04be8a1e68d15aa1a9e28e0487231e42db01fb1425c4edac7936"} Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.898665 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"27fb68cd-53bd-4337-b199-605b7c23c33b","Type":"ContainerStarted","Data":"044b087f1de9a148230e1198bf558c0aa8fe71f1ffb75b8d40f78b3c43f288d7"} Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.898682 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"27fb68cd-53bd-4337-b199-605b7c23c33b","Type":"ContainerStarted","Data":"44b9ecc73876a5faac23e7ab0fc7f91342f51c518fdc7bce3093ef6b7a66eeda"} Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.901140 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"31f17d8d-4014-4805-b74a-8bca66f58e8f","Type":"ContainerStarted","Data":"23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6"} Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.901226 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"31f17d8d-4014-4805-b74a-8bca66f58e8f","Type":"ContainerStarted","Data":"ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137"} Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.901266 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" 
event={"ID":"31f17d8d-4014-4805-b74a-8bca66f58e8f","Type":"ContainerStarted","Data":"5df82e469df70e4f4ef2915146df3d704476e8d5b2c02c705faa8e0f8dc6e968"} Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.901222 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="31f17d8d-4014-4805-b74a-8bca66f58e8f" containerName="glance-log" containerID="cri-o://ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137" gracePeriod=30 Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.901277 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="31f17d8d-4014-4805-b74a-8bca66f58e8f" containerName="glance-httpd" containerID="cri-o://23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6" gracePeriod=30 Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.903918 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"7d7bf209-cf34-4f0c-89ff-b1d92df146c0","Type":"ContainerStarted","Data":"8baa07c799e715eb695e0e5090c4141ff87bdab38d8f6d039633c9e4c31fbf63"} Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.904038 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"7d7bf209-cf34-4f0c-89ff-b1d92df146c0","Type":"ContainerStarted","Data":"957bb885f3a7200920e05229e18e9d7eba58dda79c4420bafe92260a6c04a66a"} Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.904124 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"7d7bf209-cf34-4f0c-89ff-b1d92df146c0","Type":"ContainerStarted","Data":"9304a720ab6c2249857d8bf9347dd6e27510bb964b833155cdff17dc766390a0"} Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.911626 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"7def500d-1af6-481b-b69e-6bd383df2252","Type":"ContainerStarted","Data":"feb2512b046855fb9e03a7a4fce796d2acf4b967f68dfd616bcfd8035cc982c4"} Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.911683 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"7def500d-1af6-481b-b69e-6bd383df2252","Type":"ContainerStarted","Data":"97f1a6ce38908c6f44637fe60f8c8e70462b2a60d93174af98b41589cd195f9c"} Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.911699 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"7def500d-1af6-481b-b69e-6bd383df2252","Type":"ContainerStarted","Data":"64d423bd04c526f378c0aa9696384e6c8db5b98520e252d17c8f0fefee2d6e0a"} Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.925029 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-1" podStartSLOduration=1.925011167 podStartE2EDuration="1.925011167s" podCreationTimestamp="2026-01-31 07:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:14:05.922586061 +0000 UTC m=+1872.199845636" watchObservedRunningTime="2026-01-31 07:14:05.925011167 +0000 UTC m=+1872.202270742" Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.951822 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=2.951794489 podStartE2EDuration="2.951794489s" podCreationTimestamp="2026-01-31 07:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:14:05.946103534 +0000 UTC m=+1872.223363119" watchObservedRunningTime="2026-01-31 07:14:05.951794489 +0000 UTC 
m=+1872.229054084" Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.975523 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-1" podStartSLOduration=2.975504287 podStartE2EDuration="2.975504287s" podCreationTimestamp="2026-01-31 07:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:14:05.969776401 +0000 UTC m=+1872.247035996" watchObservedRunningTime="2026-01-31 07:14:05.975504287 +0000 UTC m=+1872.252763872" Jan 31 07:14:05 crc kubenswrapper[4687]: I0131 07:14:05.998713 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.99868142 podStartE2EDuration="2.99868142s" podCreationTimestamp="2026-01-31 07:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:14:05.991445393 +0000 UTC m=+1872.268705018" watchObservedRunningTime="2026-01-31 07:14:05.99868142 +0000 UTC m=+1872.275941015" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.316450 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.494359 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-run\") pod \"31f17d8d-4014-4805-b74a-8bca66f58e8f\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.494862 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-run" (OuterVolumeSpecName: "run") pod "31f17d8d-4014-4805-b74a-8bca66f58e8f" (UID: "31f17d8d-4014-4805-b74a-8bca66f58e8f"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.494933 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-lib-modules\") pod \"31f17d8d-4014-4805-b74a-8bca66f58e8f\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.495096 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "31f17d8d-4014-4805-b74a-8bca66f58e8f" (UID: "31f17d8d-4014-4805-b74a-8bca66f58e8f"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.495246 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85c7b\" (UniqueName: \"kubernetes.io/projected/31f17d8d-4014-4805-b74a-8bca66f58e8f-kube-api-access-85c7b\") pod \"31f17d8d-4014-4805-b74a-8bca66f58e8f\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.495433 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31f17d8d-4014-4805-b74a-8bca66f58e8f-httpd-run\") pod \"31f17d8d-4014-4805-b74a-8bca66f58e8f\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.495895 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f17d8d-4014-4805-b74a-8bca66f58e8f-config-data\") pod \"31f17d8d-4014-4805-b74a-8bca66f58e8f\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.496599 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"31f17d8d-4014-4805-b74a-8bca66f58e8f\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.496662 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"31f17d8d-4014-4805-b74a-8bca66f58e8f\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.496697 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-etc-nvme\") 
pod \"31f17d8d-4014-4805-b74a-8bca66f58e8f\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.496779 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f17d8d-4014-4805-b74a-8bca66f58e8f-scripts\") pod \"31f17d8d-4014-4805-b74a-8bca66f58e8f\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.496815 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-sys\") pod \"31f17d8d-4014-4805-b74a-8bca66f58e8f\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.496906 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-etc-iscsi\") pod \"31f17d8d-4014-4805-b74a-8bca66f58e8f\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.496955 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-dev\") pod \"31f17d8d-4014-4805-b74a-8bca66f58e8f\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.496976 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-var-locks-brick\") pod \"31f17d8d-4014-4805-b74a-8bca66f58e8f\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.497010 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/31f17d8d-4014-4805-b74a-8bca66f58e8f-logs\") pod \"31f17d8d-4014-4805-b74a-8bca66f58e8f\" (UID: \"31f17d8d-4014-4805-b74a-8bca66f58e8f\") " Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.497793 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.497816 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.495848 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31f17d8d-4014-4805-b74a-8bca66f58e8f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "31f17d8d-4014-4805-b74a-8bca66f58e8f" (UID: "31f17d8d-4014-4805-b74a-8bca66f58e8f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.498117 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31f17d8d-4014-4805-b74a-8bca66f58e8f-logs" (OuterVolumeSpecName: "logs") pod "31f17d8d-4014-4805-b74a-8bca66f58e8f" (UID: "31f17d8d-4014-4805-b74a-8bca66f58e8f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.498384 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-sys" (OuterVolumeSpecName: "sys") pod "31f17d8d-4014-4805-b74a-8bca66f58e8f" (UID: "31f17d8d-4014-4805-b74a-8bca66f58e8f"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.499522 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "31f17d8d-4014-4805-b74a-8bca66f58e8f" (UID: "31f17d8d-4014-4805-b74a-8bca66f58e8f"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.499538 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-dev" (OuterVolumeSpecName: "dev") pod "31f17d8d-4014-4805-b74a-8bca66f58e8f" (UID: "31f17d8d-4014-4805-b74a-8bca66f58e8f"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.499567 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "31f17d8d-4014-4805-b74a-8bca66f58e8f" (UID: "31f17d8d-4014-4805-b74a-8bca66f58e8f"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.499595 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "31f17d8d-4014-4805-b74a-8bca66f58e8f" (UID: "31f17d8d-4014-4805-b74a-8bca66f58e8f"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.503430 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance-cache") pod "31f17d8d-4014-4805-b74a-8bca66f58e8f" (UID: "31f17d8d-4014-4805-b74a-8bca66f58e8f"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.503609 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "31f17d8d-4014-4805-b74a-8bca66f58e8f" (UID: "31f17d8d-4014-4805-b74a-8bca66f58e8f"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.507650 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f17d8d-4014-4805-b74a-8bca66f58e8f-kube-api-access-85c7b" (OuterVolumeSpecName: "kube-api-access-85c7b") pod "31f17d8d-4014-4805-b74a-8bca66f58e8f" (UID: "31f17d8d-4014-4805-b74a-8bca66f58e8f"). InnerVolumeSpecName "kube-api-access-85c7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.520958 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f17d8d-4014-4805-b74a-8bca66f58e8f-scripts" (OuterVolumeSpecName: "scripts") pod "31f17d8d-4014-4805-b74a-8bca66f58e8f" (UID: "31f17d8d-4014-4805-b74a-8bca66f58e8f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.555557 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f17d8d-4014-4805-b74a-8bca66f58e8f-config-data" (OuterVolumeSpecName: "config-data") pod "31f17d8d-4014-4805-b74a-8bca66f58e8f" (UID: "31f17d8d-4014-4805-b74a-8bca66f58e8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.599777 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.599812 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.599821 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31f17d8d-4014-4805-b74a-8bca66f58e8f-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.599830 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85c7b\" (UniqueName: \"kubernetes.io/projected/31f17d8d-4014-4805-b74a-8bca66f58e8f-kube-api-access-85c7b\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.599839 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31f17d8d-4014-4805-b74a-8bca66f58e8f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.599848 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/31f17d8d-4014-4805-b74a-8bca66f58e8f-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.599878 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.599892 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.599901 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.599910 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f17d8d-4014-4805-b74a-8bca66f58e8f-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.599920 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.599928 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/31f17d8d-4014-4805-b74a-8bca66f58e8f-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.613039 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.613620 4687 operation_generator.go:917] UnmountDevice succeeded for volume 
"local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.701800 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.702123 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.937149 4687 generic.go:334] "Generic (PLEG): container finished" podID="31f17d8d-4014-4805-b74a-8bca66f58e8f" containerID="23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6" exitCode=143 Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.937623 4687 generic.go:334] "Generic (PLEG): container finished" podID="31f17d8d-4014-4805-b74a-8bca66f58e8f" containerID="ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137" exitCode=143 Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.937347 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"31f17d8d-4014-4805-b74a-8bca66f58e8f","Type":"ContainerDied","Data":"23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6"} Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.937746 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"31f17d8d-4014-4805-b74a-8bca66f58e8f","Type":"ContainerDied","Data":"ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137"} Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.937777 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" 
event={"ID":"31f17d8d-4014-4805-b74a-8bca66f58e8f","Type":"ContainerDied","Data":"5df82e469df70e4f4ef2915146df3d704476e8d5b2c02c705faa8e0f8dc6e968"} Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.937800 4687 scope.go:117] "RemoveContainer" containerID="23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.937292 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.964520 4687 scope.go:117] "RemoveContainer" containerID="ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.976146 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.979177 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.999550 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:14:06 crc kubenswrapper[4687]: E0131 07:14:06.999884 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f17d8d-4014-4805-b74a-8bca66f58e8f" containerName="glance-httpd" Jan 31 07:14:06 crc kubenswrapper[4687]: I0131 07:14:06.999901 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f17d8d-4014-4805-b74a-8bca66f58e8f" containerName="glance-httpd" Jan 31 07:14:07 crc kubenswrapper[4687]: E0131 07:14:06.999928 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f17d8d-4014-4805-b74a-8bca66f58e8f" containerName="glance-log" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:06.999934 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f17d8d-4014-4805-b74a-8bca66f58e8f" 
containerName="glance-log" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.000056 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="31f17d8d-4014-4805-b74a-8bca66f58e8f" containerName="glance-httpd" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.000067 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="31f17d8d-4014-4805-b74a-8bca66f58e8f" containerName="glance-log" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.000800 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.040222 4687 scope.go:117] "RemoveContainer" containerID="23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6" Jan 31 07:14:07 crc kubenswrapper[4687]: E0131 07:14:07.040690 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6\": container with ID starting with 23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6 not found: ID does not exist" containerID="23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.040753 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6"} err="failed to get container status \"23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6\": rpc error: code = NotFound desc = could not find container \"23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6\": container with ID starting with 23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6 not found: ID does not exist" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.040787 4687 scope.go:117] "RemoveContainer" 
containerID="ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137" Jan 31 07:14:07 crc kubenswrapper[4687]: E0131 07:14:07.043187 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137\": container with ID starting with ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137 not found: ID does not exist" containerID="ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.043234 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137"} err="failed to get container status \"ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137\": rpc error: code = NotFound desc = could not find container \"ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137\": container with ID starting with ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137 not found: ID does not exist" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.043266 4687 scope.go:117] "RemoveContainer" containerID="23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.046580 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6"} err="failed to get container status \"23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6\": rpc error: code = NotFound desc = could not find container \"23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6\": container with ID starting with 23e0f83faa183590f2123e56426ee96cc9ba4d9d7046cf1f770cdc55454170a6 not found: ID does not exist" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.046652 4687 scope.go:117] 
"RemoveContainer" containerID="ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.047191 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137"} err="failed to get container status \"ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137\": rpc error: code = NotFound desc = could not find container \"ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137\": container with ID starting with ada35770f8ec4f42d1a263e84a44fcd0e1a09c723c0826a66afc799d06a95137 not found: ID does not exist" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.052473 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.078113 4687 scope.go:117] "RemoveContainer" containerID="187adf814d4cf77a90d93aee991fe42fee11395e53bf29c5b943d6964fffd080" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.111704 4687 scope.go:117] "RemoveContainer" containerID="3aeffc916158a4595c408f8e8d60856618b65d4f07e521145895c853299dc813" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.131066 4687 scope.go:117] "RemoveContainer" containerID="0e78e6ac18d5619d5c826f399b2ce819b7345ab20fe6f9a27a73c7ce49ea50b0" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.213915 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-584gn\" (UniqueName: \"kubernetes.io/projected/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-kube-api-access-584gn\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.213955 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-config-data\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.214001 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.214022 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.214036 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-scripts\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.214074 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.214095 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-dev\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.214267 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.214292 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-sys\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.214347 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-run\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.214363 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.214380 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-logs\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.214445 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.214516 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.316184 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-run\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.316234 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.316255 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-logs\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.316289 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.316338 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.316384 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-584gn\" (UniqueName: \"kubernetes.io/projected/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-kube-api-access-584gn\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.316403 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.316490 4687 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-config-data\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.316509 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-scripts\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.316526 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.316541 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.316561 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-dev\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.316581 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.316601 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-sys\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.316676 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-sys\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.316710 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-run\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.317133 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.317341 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-logs\") pod \"glance-default-internal-api-1\" (UID: 
\"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.317451 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.317520 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.317581 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.317629 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.317804 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-dev\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " 
pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.318081 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.318537 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.324051 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-scripts\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.326442 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-config-data\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.335671 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-584gn\" (UniqueName: \"kubernetes.io/projected/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-kube-api-access-584gn\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " 
pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.340764 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.350658 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-1\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.396763 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.611384 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31f17d8d-4014-4805-b74a-8bca66f58e8f" path="/var/lib/kubelet/pods/31f17d8d-4014-4805-b74a-8bca66f58e8f/volumes" Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.828798 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:14:07 crc kubenswrapper[4687]: I0131 07:14:07.948012 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"a9f97349-9bfe-4c6e-bddb-a40db8f381b0","Type":"ContainerStarted","Data":"91b7b4ad74a05c18cc24ac9640d25a107c79807b73ea59fca60c2551b3889a8a"} Jan 31 07:14:08 crc kubenswrapper[4687]: I0131 07:14:08.959480 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" 
event={"ID":"a9f97349-9bfe-4c6e-bddb-a40db8f381b0","Type":"ContainerStarted","Data":"92a1c548184fd98a8308bf19adae1ca910f7fadc76ee7fe6650a340855d405ff"} Jan 31 07:14:08 crc kubenswrapper[4687]: I0131 07:14:08.960113 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"a9f97349-9bfe-4c6e-bddb-a40db8f381b0","Type":"ContainerStarted","Data":"f4b34b1fa14b81512e9a2bb2b6de67d2f5f4aa403f74b9ca214266b2c2a9ab90"} Jan 31 07:14:08 crc kubenswrapper[4687]: I0131 07:14:08.989123 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-1" podStartSLOduration=2.98908252 podStartE2EDuration="2.98908252s" podCreationTimestamp="2026-01-31 07:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:14:08.983091196 +0000 UTC m=+1875.260350771" watchObservedRunningTime="2026-01-31 07:14:08.98908252 +0000 UTC m=+1875.266342105" Jan 31 07:14:14 crc kubenswrapper[4687]: I0131 07:14:14.369382 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:14 crc kubenswrapper[4687]: I0131 07:14:14.369771 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:14 crc kubenswrapper[4687]: I0131 07:14:14.397012 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:14 crc kubenswrapper[4687]: I0131 07:14:14.408886 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:14 crc kubenswrapper[4687]: I0131 07:14:14.627771 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:14 crc kubenswrapper[4687]: I0131 07:14:14.627818 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:14 crc kubenswrapper[4687]: I0131 07:14:14.641536 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:14 crc kubenswrapper[4687]: I0131 07:14:14.641940 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:14 crc kubenswrapper[4687]: I0131 07:14:14.652788 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:14 crc kubenswrapper[4687]: I0131 07:14:14.662513 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:14 crc kubenswrapper[4687]: I0131 07:14:14.678821 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:14 crc kubenswrapper[4687]: I0131 07:14:14.683263 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:15 crc kubenswrapper[4687]: I0131 07:14:15.077014 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:15 crc kubenswrapper[4687]: I0131 07:14:15.077060 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:15 crc kubenswrapper[4687]: I0131 07:14:15.077074 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:15 crc 
kubenswrapper[4687]: I0131 07:14:15.077085 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:15 crc kubenswrapper[4687]: I0131 07:14:15.077097 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:15 crc kubenswrapper[4687]: I0131 07:14:15.077107 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:17 crc kubenswrapper[4687]: I0131 07:14:17.278548 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:17 crc kubenswrapper[4687]: I0131 07:14:17.280331 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:14:17 crc kubenswrapper[4687]: I0131 07:14:17.298857 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:14:17 crc kubenswrapper[4687]: I0131 07:14:17.333070 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:17 crc kubenswrapper[4687]: I0131 07:14:17.333159 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:14:17 crc kubenswrapper[4687]: I0131 07:14:17.399634 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:17 crc kubenswrapper[4687]: I0131 07:14:17.399708 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:17 crc kubenswrapper[4687]: I0131 07:14:17.399719 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:17 
crc kubenswrapper[4687]: I0131 07:14:17.401429 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:14:17 crc kubenswrapper[4687]: I0131 07:14:17.514462 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:17 crc kubenswrapper[4687]: I0131 07:14:17.539100 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:17 crc kubenswrapper[4687]: I0131 07:14:17.539184 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:14:17 crc kubenswrapper[4687]: I0131 07:14:17.551501 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:17 crc kubenswrapper[4687]: I0131 07:14:17.552359 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:18 crc kubenswrapper[4687]: I0131 07:14:18.098749 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:18 crc kubenswrapper[4687]: I0131 07:14:18.098800 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:19 crc kubenswrapper[4687]: I0131 07:14:19.108470 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="7d7bf209-cf34-4f0c-89ff-b1d92df146c0" containerName="glance-httpd" containerID="cri-o://8baa07c799e715eb695e0e5090c4141ff87bdab38d8f6d039633c9e4c31fbf63" gracePeriod=30 Jan 31 07:14:19 crc kubenswrapper[4687]: I0131 07:14:19.108446 4687 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="glance-kuttl-tests/glance-default-external-api-0" podUID="7d7bf209-cf34-4f0c-89ff-b1d92df146c0" containerName="glance-log" containerID="cri-o://957bb885f3a7200920e05229e18e9d7eba58dda79c4420bafe92260a6c04a66a" gracePeriod=30 Jan 31 07:14:19 crc kubenswrapper[4687]: I0131 07:14:19.117327 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="7d7bf209-cf34-4f0c-89ff-b1d92df146c0" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.148:9292/healthcheck\": EOF" Jan 31 07:14:19 crc kubenswrapper[4687]: I0131 07:14:19.117484 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="7d7bf209-cf34-4f0c-89ff-b1d92df146c0" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.148:9292/healthcheck\": EOF" Jan 31 07:14:20 crc kubenswrapper[4687]: I0131 07:14:20.120954 4687 generic.go:334] "Generic (PLEG): container finished" podID="7d7bf209-cf34-4f0c-89ff-b1d92df146c0" containerID="957bb885f3a7200920e05229e18e9d7eba58dda79c4420bafe92260a6c04a66a" exitCode=143 Jan 31 07:14:20 crc kubenswrapper[4687]: I0131 07:14:20.121044 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"7d7bf209-cf34-4f0c-89ff-b1d92df146c0","Type":"ContainerDied","Data":"957bb885f3a7200920e05229e18e9d7eba58dda79c4420bafe92260a6c04a66a"} Jan 31 07:14:20 crc kubenswrapper[4687]: I0131 07:14:20.141523 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:14:20 crc kubenswrapper[4687]: I0131 07:14:20.141620 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:14:20 crc kubenswrapper[4687]: I0131 07:14:20.164990 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 
31 07:14:20 crc kubenswrapper[4687]: I0131 07:14:20.208008 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:14:20 crc kubenswrapper[4687]: I0131 07:14:20.208330 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="7def500d-1af6-481b-b69e-6bd383df2252" containerName="glance-log" containerID="cri-o://97f1a6ce38908c6f44637fe60f8c8e70462b2a60d93174af98b41589cd195f9c" gracePeriod=30 Jan 31 07:14:20 crc kubenswrapper[4687]: I0131 07:14:20.216807 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="7def500d-1af6-481b-b69e-6bd383df2252" containerName="glance-httpd" containerID="cri-o://feb2512b046855fb9e03a7a4fce796d2acf4b967f68dfd616bcfd8035cc982c4" gracePeriod=30 Jan 31 07:14:21 crc kubenswrapper[4687]: I0131 07:14:21.160091 4687 generic.go:334] "Generic (PLEG): container finished" podID="7def500d-1af6-481b-b69e-6bd383df2252" containerID="97f1a6ce38908c6f44637fe60f8c8e70462b2a60d93174af98b41589cd195f9c" exitCode=143 Jan 31 07:14:21 crc kubenswrapper[4687]: I0131 07:14:21.160165 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"7def500d-1af6-481b-b69e-6bd383df2252","Type":"ContainerDied","Data":"97f1a6ce38908c6f44637fe60f8c8e70462b2a60d93174af98b41589cd195f9c"} Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.829235 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.926782 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"7def500d-1af6-481b-b69e-6bd383df2252\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.926891 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7def500d-1af6-481b-b69e-6bd383df2252-httpd-run\") pod \"7def500d-1af6-481b-b69e-6bd383df2252\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.926980 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"7def500d-1af6-481b-b69e-6bd383df2252\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927000 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-etc-iscsi\") pod \"7def500d-1af6-481b-b69e-6bd383df2252\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927035 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-dev\") pod \"7def500d-1af6-481b-b69e-6bd383df2252\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927064 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-var-locks-brick\") pod \"7def500d-1af6-481b-b69e-6bd383df2252\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927081 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7def500d-1af6-481b-b69e-6bd383df2252-scripts\") pod \"7def500d-1af6-481b-b69e-6bd383df2252\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927111 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-run\") pod \"7def500d-1af6-481b-b69e-6bd383df2252\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927147 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvxwm\" (UniqueName: \"kubernetes.io/projected/7def500d-1af6-481b-b69e-6bd383df2252-kube-api-access-fvxwm\") pod \"7def500d-1af6-481b-b69e-6bd383df2252\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927187 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7def500d-1af6-481b-b69e-6bd383df2252-logs\") pod \"7def500d-1af6-481b-b69e-6bd383df2252\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927220 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-lib-modules\") pod \"7def500d-1af6-481b-b69e-6bd383df2252\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927252 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7def500d-1af6-481b-b69e-6bd383df2252-config-data\") pod \"7def500d-1af6-481b-b69e-6bd383df2252\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927270 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-sys\") pod \"7def500d-1af6-481b-b69e-6bd383df2252\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927288 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-etc-nvme\") pod \"7def500d-1af6-481b-b69e-6bd383df2252\" (UID: \"7def500d-1af6-481b-b69e-6bd383df2252\") " Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927541 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-run" (OuterVolumeSpecName: "run") pod "7def500d-1af6-481b-b69e-6bd383df2252" (UID: "7def500d-1af6-481b-b69e-6bd383df2252"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927612 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "7def500d-1af6-481b-b69e-6bd383df2252" (UID: "7def500d-1af6-481b-b69e-6bd383df2252"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927620 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "7def500d-1af6-481b-b69e-6bd383df2252" (UID: "7def500d-1af6-481b-b69e-6bd383df2252"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927658 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-dev" (OuterVolumeSpecName: "dev") pod "7def500d-1af6-481b-b69e-6bd383df2252" (UID: "7def500d-1af6-481b-b69e-6bd383df2252"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927688 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "7def500d-1af6-481b-b69e-6bd383df2252" (UID: "7def500d-1af6-481b-b69e-6bd383df2252"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.927683 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7def500d-1af6-481b-b69e-6bd383df2252" (UID: "7def500d-1af6-481b-b69e-6bd383df2252"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.928021 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7def500d-1af6-481b-b69e-6bd383df2252-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7def500d-1af6-481b-b69e-6bd383df2252" (UID: "7def500d-1af6-481b-b69e-6bd383df2252"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.929490 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-sys" (OuterVolumeSpecName: "sys") pod "7def500d-1af6-481b-b69e-6bd383df2252" (UID: "7def500d-1af6-481b-b69e-6bd383df2252"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.932870 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7def500d-1af6-481b-b69e-6bd383df2252-logs" (OuterVolumeSpecName: "logs") pod "7def500d-1af6-481b-b69e-6bd383df2252" (UID: "7def500d-1af6-481b-b69e-6bd383df2252"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.936350 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance") pod "7def500d-1af6-481b-b69e-6bd383df2252" (UID: "7def500d-1af6-481b-b69e-6bd383df2252"). InnerVolumeSpecName "local-storage13-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.936532 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7def500d-1af6-481b-b69e-6bd383df2252-kube-api-access-fvxwm" (OuterVolumeSpecName: "kube-api-access-fvxwm") pod "7def500d-1af6-481b-b69e-6bd383df2252" (UID: "7def500d-1af6-481b-b69e-6bd383df2252"). InnerVolumeSpecName "kube-api-access-fvxwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.937352 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage15-crc" (OuterVolumeSpecName: "glance-cache") pod "7def500d-1af6-481b-b69e-6bd383df2252" (UID: "7def500d-1af6-481b-b69e-6bd383df2252"). InnerVolumeSpecName "local-storage15-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.946201 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7def500d-1af6-481b-b69e-6bd383df2252-scripts" (OuterVolumeSpecName: "scripts") pod "7def500d-1af6-481b-b69e-6bd383df2252" (UID: "7def500d-1af6-481b-b69e-6bd383df2252"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.980792 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:23 crc kubenswrapper[4687]: I0131 07:14:23.991213 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7def500d-1af6-481b-b69e-6bd383df2252-config-data" (OuterVolumeSpecName: "config-data") pod "7def500d-1af6-481b-b69e-6bd383df2252" (UID: "7def500d-1af6-481b-b69e-6bd383df2252"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.028592 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7def500d-1af6-481b-b69e-6bd383df2252-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.028893 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.028905 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.028916 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvxwm\" (UniqueName: \"kubernetes.io/projected/7def500d-1af6-481b-b69e-6bd383df2252-kube-api-access-fvxwm\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.028925 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7def500d-1af6-481b-b69e-6bd383df2252-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.028932 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.028942 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7def500d-1af6-481b-b69e-6bd383df2252-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.028951 4687 reconciler_common.go:293] "Volume detached for volume 
\"sys\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.028959 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.028989 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.028997 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7def500d-1af6-481b-b69e-6bd383df2252-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.029010 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") on node \"crc\" " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.029018 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.029026 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7def500d-1af6-481b-b69e-6bd383df2252-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.042612 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage15-crc" (UniqueName: "kubernetes.io/local-volume/local-storage15-crc") on node "crc" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.042871 4687 operation_generator.go:917] UnmountDevice succeeded 
for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.130604 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-var-locks-brick\") pod \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.130659 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.130702 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-dev\") pod \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.130735 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-scripts\") pod \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.130838 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-run\") pod \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.130889 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqgqr\" (UniqueName: 
\"kubernetes.io/projected/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-kube-api-access-nqgqr\") pod \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.130922 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-logs\") pod \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.130728 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "7d7bf209-cf34-4f0c-89ff-b1d92df146c0" (UID: "7d7bf209-cf34-4f0c-89ff-b1d92df146c0"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.130970 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-config-data\") pod \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.130995 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-lib-modules\") pod \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.131012 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-dev" (OuterVolumeSpecName: "dev") pod "7d7bf209-cf34-4f0c-89ff-b1d92df146c0" (UID: "7d7bf209-cf34-4f0c-89ff-b1d92df146c0"). 
InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.131021 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-sys\") pod \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.131057 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.131076 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-etc-iscsi\") pod \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.131112 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-etc-nvme\") pod \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.131152 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-httpd-run\") pod \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\" (UID: \"7d7bf209-cf34-4f0c-89ff-b1d92df146c0\") " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.131591 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" 
DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.131613 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.131625 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.131635 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.131874 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7d7bf209-cf34-4f0c-89ff-b1d92df146c0" (UID: "7d7bf209-cf34-4f0c-89ff-b1d92df146c0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.131906 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-run" (OuterVolumeSpecName: "run") pod "7d7bf209-cf34-4f0c-89ff-b1d92df146c0" (UID: "7d7bf209-cf34-4f0c-89ff-b1d92df146c0"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.134657 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-scripts" (OuterVolumeSpecName: "scripts") pod "7d7bf209-cf34-4f0c-89ff-b1d92df146c0" (UID: "7d7bf209-cf34-4f0c-89ff-b1d92df146c0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.134770 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "7d7bf209-cf34-4f0c-89ff-b1d92df146c0" (UID: "7d7bf209-cf34-4f0c-89ff-b1d92df146c0"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.134812 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-sys" (OuterVolumeSpecName: "sys") pod "7d7bf209-cf34-4f0c-89ff-b1d92df146c0" (UID: "7d7bf209-cf34-4f0c-89ff-b1d92df146c0"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.135093 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-kube-api-access-nqgqr" (OuterVolumeSpecName: "kube-api-access-nqgqr") pod "7d7bf209-cf34-4f0c-89ff-b1d92df146c0" (UID: "7d7bf209-cf34-4f0c-89ff-b1d92df146c0"). InnerVolumeSpecName "kube-api-access-nqgqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.135129 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-logs" (OuterVolumeSpecName: "logs") pod "7d7bf209-cf34-4f0c-89ff-b1d92df146c0" (UID: "7d7bf209-cf34-4f0c-89ff-b1d92df146c0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.135826 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7d7bf209-cf34-4f0c-89ff-b1d92df146c0" (UID: "7d7bf209-cf34-4f0c-89ff-b1d92df146c0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.135863 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "7d7bf209-cf34-4f0c-89ff-b1d92df146c0" (UID: "7d7bf209-cf34-4f0c-89ff-b1d92df146c0"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.135884 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "7d7bf209-cf34-4f0c-89ff-b1d92df146c0" (UID: "7d7bf209-cf34-4f0c-89ff-b1d92df146c0"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.137336 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage17-crc" (OuterVolumeSpecName: "glance-cache") pod "7d7bf209-cf34-4f0c-89ff-b1d92df146c0" (UID: "7d7bf209-cf34-4f0c-89ff-b1d92df146c0"). InnerVolumeSpecName "local-storage17-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.172675 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-config-data" (OuterVolumeSpecName: "config-data") pod "7d7bf209-cf34-4f0c-89ff-b1d92df146c0" (UID: "7d7bf209-cf34-4f0c-89ff-b1d92df146c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.188973 4687 generic.go:334] "Generic (PLEG): container finished" podID="7def500d-1af6-481b-b69e-6bd383df2252" containerID="feb2512b046855fb9e03a7a4fce796d2acf4b967f68dfd616bcfd8035cc982c4" exitCode=0 Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.189057 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"7def500d-1af6-481b-b69e-6bd383df2252","Type":"ContainerDied","Data":"feb2512b046855fb9e03a7a4fce796d2acf4b967f68dfd616bcfd8035cc982c4"} Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.189091 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"7def500d-1af6-481b-b69e-6bd383df2252","Type":"ContainerDied","Data":"64d423bd04c526f378c0aa9696384e6c8db5b98520e252d17c8f0fefee2d6e0a"} Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.189111 4687 scope.go:117] "RemoveContainer" containerID="feb2512b046855fb9e03a7a4fce796d2acf4b967f68dfd616bcfd8035cc982c4" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.189260 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.197056 4687 generic.go:334] "Generic (PLEG): container finished" podID="7d7bf209-cf34-4f0c-89ff-b1d92df146c0" containerID="8baa07c799e715eb695e0e5090c4141ff87bdab38d8f6d039633c9e4c31fbf63" exitCode=0 Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.197106 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"7d7bf209-cf34-4f0c-89ff-b1d92df146c0","Type":"ContainerDied","Data":"8baa07c799e715eb695e0e5090c4141ff87bdab38d8f6d039633c9e4c31fbf63"} Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.197141 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"7d7bf209-cf34-4f0c-89ff-b1d92df146c0","Type":"ContainerDied","Data":"9304a720ab6c2249857d8bf9347dd6e27510bb964b833155cdff17dc766390a0"} Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.197138 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.223218 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.232446 4687 scope.go:117] "RemoveContainer" containerID="97f1a6ce38908c6f44637fe60f8c8e70462b2a60d93174af98b41589cd195f9c" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.233899 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.234014 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.234033 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.234042 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.234057 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.234071 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc 
kubenswrapper[4687]: I0131 07:14:24.234082 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.234092 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqgqr\" (UniqueName: \"kubernetes.io/projected/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-kube-api-access-nqgqr\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.234103 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.234173 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.234228 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.234239 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d7bf209-cf34-4f0c-89ff-b1d92df146c0-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.236897 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.253905 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.271218 4687 operation_generator.go:917] UnmountDevice succeeded for 
volume "local-storage17-crc" (UniqueName: "kubernetes.io/local-volume/local-storage17-crc") on node "crc" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.276731 4687 scope.go:117] "RemoveContainer" containerID="feb2512b046855fb9e03a7a4fce796d2acf4b967f68dfd616bcfd8035cc982c4" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.276891 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:14:24 crc kubenswrapper[4687]: E0131 07:14:24.279913 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb2512b046855fb9e03a7a4fce796d2acf4b967f68dfd616bcfd8035cc982c4\": container with ID starting with feb2512b046855fb9e03a7a4fce796d2acf4b967f68dfd616bcfd8035cc982c4 not found: ID does not exist" containerID="feb2512b046855fb9e03a7a4fce796d2acf4b967f68dfd616bcfd8035cc982c4" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.279991 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb2512b046855fb9e03a7a4fce796d2acf4b967f68dfd616bcfd8035cc982c4"} err="failed to get container status \"feb2512b046855fb9e03a7a4fce796d2acf4b967f68dfd616bcfd8035cc982c4\": rpc error: code = NotFound desc = could not find container \"feb2512b046855fb9e03a7a4fce796d2acf4b967f68dfd616bcfd8035cc982c4\": container with ID starting with feb2512b046855fb9e03a7a4fce796d2acf4b967f68dfd616bcfd8035cc982c4 not found: ID does not exist" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.280022 4687 scope.go:117] "RemoveContainer" containerID="97f1a6ce38908c6f44637fe60f8c8e70462b2a60d93174af98b41589cd195f9c" Jan 31 07:14:24 crc kubenswrapper[4687]: E0131 07:14:24.280550 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97f1a6ce38908c6f44637fe60f8c8e70462b2a60d93174af98b41589cd195f9c\": container with ID starting with 
97f1a6ce38908c6f44637fe60f8c8e70462b2a60d93174af98b41589cd195f9c not found: ID does not exist" containerID="97f1a6ce38908c6f44637fe60f8c8e70462b2a60d93174af98b41589cd195f9c" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.280618 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97f1a6ce38908c6f44637fe60f8c8e70462b2a60d93174af98b41589cd195f9c"} err="failed to get container status \"97f1a6ce38908c6f44637fe60f8c8e70462b2a60d93174af98b41589cd195f9c\": rpc error: code = NotFound desc = could not find container \"97f1a6ce38908c6f44637fe60f8c8e70462b2a60d93174af98b41589cd195f9c\": container with ID starting with 97f1a6ce38908c6f44637fe60f8c8e70462b2a60d93174af98b41589cd195f9c not found: ID does not exist" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.280647 4687 scope.go:117] "RemoveContainer" containerID="8baa07c799e715eb695e0e5090c4141ff87bdab38d8f6d039633c9e4c31fbf63" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.285378 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:14:24 crc kubenswrapper[4687]: E0131 07:14:24.285676 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7bf209-cf34-4f0c-89ff-b1d92df146c0" containerName="glance-log" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.285687 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7bf209-cf34-4f0c-89ff-b1d92df146c0" containerName="glance-log" Jan 31 07:14:24 crc kubenswrapper[4687]: E0131 07:14:24.285698 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7bf209-cf34-4f0c-89ff-b1d92df146c0" containerName="glance-httpd" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.285703 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7bf209-cf34-4f0c-89ff-b1d92df146c0" containerName="glance-httpd" Jan 31 07:14:24 crc kubenswrapper[4687]: E0131 07:14:24.285719 4687 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7def500d-1af6-481b-b69e-6bd383df2252" containerName="glance-log" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.285725 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7def500d-1af6-481b-b69e-6bd383df2252" containerName="glance-log" Jan 31 07:14:24 crc kubenswrapper[4687]: E0131 07:14:24.285737 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7def500d-1af6-481b-b69e-6bd383df2252" containerName="glance-httpd" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.285742 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7def500d-1af6-481b-b69e-6bd383df2252" containerName="glance-httpd" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.285871 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7bf209-cf34-4f0c-89ff-b1d92df146c0" containerName="glance-log" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.285886 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7def500d-1af6-481b-b69e-6bd383df2252" containerName="glance-httpd" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.285897 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7bf209-cf34-4f0c-89ff-b1d92df146c0" containerName="glance-httpd" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.285906 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7def500d-1af6-481b-b69e-6bd383df2252" containerName="glance-log" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.286656 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.290625 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.301632 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.315279 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.316069 4687 scope.go:117] "RemoveContainer" containerID="957bb885f3a7200920e05229e18e9d7eba58dda79c4420bafe92260a6c04a66a" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.317133 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.322200 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.333355 4687 scope.go:117] "RemoveContainer" containerID="8baa07c799e715eb695e0e5090c4141ff87bdab38d8f6d039633c9e4c31fbf63" Jan 31 07:14:24 crc kubenswrapper[4687]: E0131 07:14:24.333886 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8baa07c799e715eb695e0e5090c4141ff87bdab38d8f6d039633c9e4c31fbf63\": container with ID starting with 8baa07c799e715eb695e0e5090c4141ff87bdab38d8f6d039633c9e4c31fbf63 not found: ID does not exist" containerID="8baa07c799e715eb695e0e5090c4141ff87bdab38d8f6d039633c9e4c31fbf63" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.333927 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8baa07c799e715eb695e0e5090c4141ff87bdab38d8f6d039633c9e4c31fbf63"} err="failed to get container status \"8baa07c799e715eb695e0e5090c4141ff87bdab38d8f6d039633c9e4c31fbf63\": rpc error: code = NotFound desc = could not find container \"8baa07c799e715eb695e0e5090c4141ff87bdab38d8f6d039633c9e4c31fbf63\": container with ID starting with 8baa07c799e715eb695e0e5090c4141ff87bdab38d8f6d039633c9e4c31fbf63 not found: ID does not exist" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.333955 4687 scope.go:117] "RemoveContainer" containerID="957bb885f3a7200920e05229e18e9d7eba58dda79c4420bafe92260a6c04a66a" Jan 31 07:14:24 crc kubenswrapper[4687]: E0131 07:14:24.334531 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"957bb885f3a7200920e05229e18e9d7eba58dda79c4420bafe92260a6c04a66a\": container with ID starting with 957bb885f3a7200920e05229e18e9d7eba58dda79c4420bafe92260a6c04a66a not found: ID does not exist" containerID="957bb885f3a7200920e05229e18e9d7eba58dda79c4420bafe92260a6c04a66a" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.334563 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"957bb885f3a7200920e05229e18e9d7eba58dda79c4420bafe92260a6c04a66a"} err="failed to get container status \"957bb885f3a7200920e05229e18e9d7eba58dda79c4420bafe92260a6c04a66a\": rpc error: code = NotFound desc = could not find container \"957bb885f3a7200920e05229e18e9d7eba58dda79c4420bafe92260a6c04a66a\": container with ID starting with 957bb885f3a7200920e05229e18e9d7eba58dda79c4420bafe92260a6c04a66a not found: ID does not exist" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.336667 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 
07:14:24.336696 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438186 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-run\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438236 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438271 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438290 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54jl5\" (UniqueName: \"kubernetes.io/projected/787ae24c-3f78-4d06-b797-e50650509346-kube-api-access-54jl5\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438333 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-dev\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438352 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-sys\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438372 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/787ae24c-3f78-4d06-b797-e50650509346-logs\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438437 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438456 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/787ae24c-3f78-4d06-b797-e50650509346-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438473 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438501 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438529 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-logs\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438543 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438582 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/787ae24c-3f78-4d06-b797-e50650509346-scripts\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438602 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-scripts\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438618 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438635 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-dev\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438655 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-sys\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438690 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438713 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438961 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-config-data\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438981 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.438998 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.439015 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gzdn\" (UniqueName: \"kubernetes.io/projected/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-kube-api-access-9gzdn\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 
07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.439040 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.439088 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-run\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.439200 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.439235 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787ae24c-3f78-4d06-b797-e50650509346-config-data\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540179 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787ae24c-3f78-4d06-b797-e50650509346-config-data\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540239 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-run\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540269 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540289 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540304 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54jl5\" (UniqueName: \"kubernetes.io/projected/787ae24c-3f78-4d06-b797-e50650509346-kube-api-access-54jl5\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540324 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-dev\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " 
pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540340 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-sys\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540357 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/787ae24c-3f78-4d06-b797-e50650509346-logs\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540381 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540398 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/787ae24c-3f78-4d06-b797-e50650509346-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540453 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: 
I0131 07:14:24.540474 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540503 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-logs\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540516 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540535 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-scripts\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540550 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/787ae24c-3f78-4d06-b797-e50650509346-scripts\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540568 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540582 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-dev\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540611 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-sys\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540640 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540661 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540677 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-config-data\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540693 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540708 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540726 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gzdn\" (UniqueName: \"kubernetes.io/projected/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-kube-api-access-9gzdn\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540746 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540761 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-run\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540783 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540845 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540936 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.540960 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/787ae24c-3f78-4d06-b797-e50650509346-logs\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.541431 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.541474 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-run\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.541506 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.541540 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-dev\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.541815 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/787ae24c-3f78-4d06-b797-e50650509346-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.541899 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: 
\"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.542116 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-dev\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.542191 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-sys\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.542189 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-sys\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.542229 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.542291 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.542357 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") device mount path \"/mnt/openstack/pv15\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.542446 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.542559 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") device mount path \"/mnt/openstack/pv17\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.542650 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-logs\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.543290 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " 
pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.543344 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.543395 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-run\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.543562 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.546019 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787ae24c-3f78-4d06-b797-e50650509346-config-data\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.547890 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-config-data\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc 
kubenswrapper[4687]: I0131 07:14:24.553967 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-scripts\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.554019 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/787ae24c-3f78-4d06-b797-e50650509346-scripts\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.562672 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.563918 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.564746 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54jl5\" (UniqueName: \"kubernetes.io/projected/787ae24c-3f78-4d06-b797-e50650509346-kube-api-access-54jl5\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.565631 4687 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.579014 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-0\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.581908 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gzdn\" (UniqueName: \"kubernetes.io/projected/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-kube-api-access-9gzdn\") pod \"glance-default-external-api-0\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.610160 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:24 crc kubenswrapper[4687]: I0131 07:14:24.633580 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:25 crc kubenswrapper[4687]: I0131 07:14:25.045805 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:14:25 crc kubenswrapper[4687]: I0131 07:14:25.117011 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:14:25 crc kubenswrapper[4687]: W0131 07:14:25.118019 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod354c2b73_6b6c_4b19_b1e3_1bb8e221150a.slice/crio-6c6cab8daf41fc04d3ccfad7ae85e17637dfbdb2647f537e83bdd24027abd2fe WatchSource:0}: Error finding container 6c6cab8daf41fc04d3ccfad7ae85e17637dfbdb2647f537e83bdd24027abd2fe: Status 404 returned error can't find the container with id 6c6cab8daf41fc04d3ccfad7ae85e17637dfbdb2647f537e83bdd24027abd2fe Jan 31 07:14:25 crc kubenswrapper[4687]: I0131 07:14:25.210101 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"354c2b73-6b6c-4b19-b1e3-1bb8e221150a","Type":"ContainerStarted","Data":"6c6cab8daf41fc04d3ccfad7ae85e17637dfbdb2647f537e83bdd24027abd2fe"} Jan 31 07:14:25 crc kubenswrapper[4687]: I0131 07:14:25.216333 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"787ae24c-3f78-4d06-b797-e50650509346","Type":"ContainerStarted","Data":"a07238fab77e2b246e30650c672a3563683cc58d5005da94cb6a3f6566c697ee"} Jan 31 07:14:25 crc kubenswrapper[4687]: I0131 07:14:25.216662 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"787ae24c-3f78-4d06-b797-e50650509346","Type":"ContainerStarted","Data":"632572b547459f3b6395df302dcb03a4260d6bdd0f17435d6a912d6116f11b8c"} Jan 31 07:14:25 crc kubenswrapper[4687]: I0131 
07:14:25.611908 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d7bf209-cf34-4f0c-89ff-b1d92df146c0" path="/var/lib/kubelet/pods/7d7bf209-cf34-4f0c-89ff-b1d92df146c0/volumes" Jan 31 07:14:25 crc kubenswrapper[4687]: I0131 07:14:25.613473 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7def500d-1af6-481b-b69e-6bd383df2252" path="/var/lib/kubelet/pods/7def500d-1af6-481b-b69e-6bd383df2252/volumes" Jan 31 07:14:26 crc kubenswrapper[4687]: I0131 07:14:26.237675 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"354c2b73-6b6c-4b19-b1e3-1bb8e221150a","Type":"ContainerStarted","Data":"f957cfcfdf342f9739752e920898124910043b2ff6c3de218c0a44305ce2ad99"} Jan 31 07:14:26 crc kubenswrapper[4687]: I0131 07:14:26.238007 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"354c2b73-6b6c-4b19-b1e3-1bb8e221150a","Type":"ContainerStarted","Data":"05e94f66091700e0b1eafa01e8e56f2630a4557a5d71fb87e17be17c703e64df"} Jan 31 07:14:26 crc kubenswrapper[4687]: I0131 07:14:26.240328 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"787ae24c-3f78-4d06-b797-e50650509346","Type":"ContainerStarted","Data":"3e36cd32b9ad9b5ce0d4a234c68edb3099f4a2ddc3a14fa50f2cb4a79f45d5c0"} Jan 31 07:14:26 crc kubenswrapper[4687]: I0131 07:14:26.284746 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=2.284727126 podStartE2EDuration="2.284727126s" podCreationTimestamp="2026-01-31 07:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:14:26.264221306 +0000 UTC m=+1892.541480901" watchObservedRunningTime="2026-01-31 07:14:26.284727126 +0000 UTC 
m=+1892.561986701" Jan 31 07:14:26 crc kubenswrapper[4687]: I0131 07:14:26.285141 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.285134488 podStartE2EDuration="2.285134488s" podCreationTimestamp="2026-01-31 07:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:14:26.282586678 +0000 UTC m=+1892.559846263" watchObservedRunningTime="2026-01-31 07:14:26.285134488 +0000 UTC m=+1892.562394063" Jan 31 07:14:34 crc kubenswrapper[4687]: I0131 07:14:34.611542 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:34 crc kubenswrapper[4687]: I0131 07:14:34.613705 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:34 crc kubenswrapper[4687]: I0131 07:14:34.634496 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:34 crc kubenswrapper[4687]: I0131 07:14:34.634574 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:34 crc kubenswrapper[4687]: I0131 07:14:34.650141 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:34 crc kubenswrapper[4687]: I0131 07:14:34.654042 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:34 crc kubenswrapper[4687]: I0131 07:14:34.658773 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:34 crc kubenswrapper[4687]: I0131 
07:14:34.688385 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:35 crc kubenswrapper[4687]: I0131 07:14:35.305310 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:35 crc kubenswrapper[4687]: I0131 07:14:35.305659 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:35 crc kubenswrapper[4687]: I0131 07:14:35.305672 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:35 crc kubenswrapper[4687]: I0131 07:14:35.305680 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:37 crc kubenswrapper[4687]: I0131 07:14:37.319687 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:14:37 crc kubenswrapper[4687]: I0131 07:14:37.320009 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:14:37 crc kubenswrapper[4687]: I0131 07:14:37.320837 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:14:37 crc kubenswrapper[4687]: I0131 07:14:37.320849 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:14:37 crc kubenswrapper[4687]: I0131 07:14:37.380728 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:37 crc kubenswrapper[4687]: I0131 07:14:37.385188 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:14:37 crc kubenswrapper[4687]: I0131 07:14:37.486324 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:14:37 crc kubenswrapper[4687]: I0131 07:14:37.488350 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:15:00 crc kubenswrapper[4687]: I0131 07:15:00.142309 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6"] Jan 31 07:15:00 crc kubenswrapper[4687]: I0131 07:15:00.143921 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6" Jan 31 07:15:00 crc kubenswrapper[4687]: I0131 07:15:00.146266 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 07:15:00 crc kubenswrapper[4687]: I0131 07:15:00.147624 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6"] Jan 31 07:15:00 crc kubenswrapper[4687]: I0131 07:15:00.148987 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 07:15:00 crc kubenswrapper[4687]: I0131 07:15:00.231712 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f8aef16-11a4-4972-8298-3efca57c1338-secret-volume\") pod \"collect-profiles-29497395-q62t6\" (UID: \"2f8aef16-11a4-4972-8298-3efca57c1338\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6" Jan 31 07:15:00 crc kubenswrapper[4687]: I0131 07:15:00.231772 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mndrb\" (UniqueName: \"kubernetes.io/projected/2f8aef16-11a4-4972-8298-3efca57c1338-kube-api-access-mndrb\") pod 
\"collect-profiles-29497395-q62t6\" (UID: \"2f8aef16-11a4-4972-8298-3efca57c1338\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6" Jan 31 07:15:00 crc kubenswrapper[4687]: I0131 07:15:00.231838 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f8aef16-11a4-4972-8298-3efca57c1338-config-volume\") pod \"collect-profiles-29497395-q62t6\" (UID: \"2f8aef16-11a4-4972-8298-3efca57c1338\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6" Jan 31 07:15:00 crc kubenswrapper[4687]: I0131 07:15:00.333680 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f8aef16-11a4-4972-8298-3efca57c1338-secret-volume\") pod \"collect-profiles-29497395-q62t6\" (UID: \"2f8aef16-11a4-4972-8298-3efca57c1338\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6" Jan 31 07:15:00 crc kubenswrapper[4687]: I0131 07:15:00.333777 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mndrb\" (UniqueName: \"kubernetes.io/projected/2f8aef16-11a4-4972-8298-3efca57c1338-kube-api-access-mndrb\") pod \"collect-profiles-29497395-q62t6\" (UID: \"2f8aef16-11a4-4972-8298-3efca57c1338\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6" Jan 31 07:15:00 crc kubenswrapper[4687]: I0131 07:15:00.333825 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f8aef16-11a4-4972-8298-3efca57c1338-config-volume\") pod \"collect-profiles-29497395-q62t6\" (UID: \"2f8aef16-11a4-4972-8298-3efca57c1338\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6" Jan 31 07:15:00 crc kubenswrapper[4687]: I0131 07:15:00.335066 4687 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f8aef16-11a4-4972-8298-3efca57c1338-config-volume\") pod \"collect-profiles-29497395-q62t6\" (UID: \"2f8aef16-11a4-4972-8298-3efca57c1338\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6" Jan 31 07:15:00 crc kubenswrapper[4687]: I0131 07:15:00.340631 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f8aef16-11a4-4972-8298-3efca57c1338-secret-volume\") pod \"collect-profiles-29497395-q62t6\" (UID: \"2f8aef16-11a4-4972-8298-3efca57c1338\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6" Jan 31 07:15:00 crc kubenswrapper[4687]: I0131 07:15:00.353867 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mndrb\" (UniqueName: \"kubernetes.io/projected/2f8aef16-11a4-4972-8298-3efca57c1338-kube-api-access-mndrb\") pod \"collect-profiles-29497395-q62t6\" (UID: \"2f8aef16-11a4-4972-8298-3efca57c1338\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6" Jan 31 07:15:00 crc kubenswrapper[4687]: I0131 07:15:00.470361 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6" Jan 31 07:15:00 crc kubenswrapper[4687]: I0131 07:15:00.888638 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6"] Jan 31 07:15:01 crc kubenswrapper[4687]: I0131 07:15:01.510597 4687 generic.go:334] "Generic (PLEG): container finished" podID="2f8aef16-11a4-4972-8298-3efca57c1338" containerID="a7100dc1144f442a7d9d87c7f06255f0694c1e27fa6b62390ad2d30c05957bc4" exitCode=0 Jan 31 07:15:01 crc kubenswrapper[4687]: I0131 07:15:01.510667 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6" event={"ID":"2f8aef16-11a4-4972-8298-3efca57c1338","Type":"ContainerDied","Data":"a7100dc1144f442a7d9d87c7f06255f0694c1e27fa6b62390ad2d30c05957bc4"} Jan 31 07:15:01 crc kubenswrapper[4687]: I0131 07:15:01.511021 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6" event={"ID":"2f8aef16-11a4-4972-8298-3efca57c1338","Type":"ContainerStarted","Data":"419e4c2ccae21dab2962dea5b7fa4da7a412371725983182f601dc454dc67aff"} Jan 31 07:15:02 crc kubenswrapper[4687]: I0131 07:15:02.990291 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6" Jan 31 07:15:02 crc kubenswrapper[4687]: I0131 07:15:02.993334 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f8aef16-11a4-4972-8298-3efca57c1338-config-volume\") pod \"2f8aef16-11a4-4972-8298-3efca57c1338\" (UID: \"2f8aef16-11a4-4972-8298-3efca57c1338\") " Jan 31 07:15:02 crc kubenswrapper[4687]: I0131 07:15:02.993428 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f8aef16-11a4-4972-8298-3efca57c1338-secret-volume\") pod \"2f8aef16-11a4-4972-8298-3efca57c1338\" (UID: \"2f8aef16-11a4-4972-8298-3efca57c1338\") " Jan 31 07:15:02 crc kubenswrapper[4687]: I0131 07:15:02.993456 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mndrb\" (UniqueName: \"kubernetes.io/projected/2f8aef16-11a4-4972-8298-3efca57c1338-kube-api-access-mndrb\") pod \"2f8aef16-11a4-4972-8298-3efca57c1338\" (UID: \"2f8aef16-11a4-4972-8298-3efca57c1338\") " Jan 31 07:15:02 crc kubenswrapper[4687]: I0131 07:15:02.994282 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8aef16-11a4-4972-8298-3efca57c1338-config-volume" (OuterVolumeSpecName: "config-volume") pod "2f8aef16-11a4-4972-8298-3efca57c1338" (UID: "2f8aef16-11a4-4972-8298-3efca57c1338"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:15:03 crc kubenswrapper[4687]: I0131 07:15:03.002618 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8aef16-11a4-4972-8298-3efca57c1338-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2f8aef16-11a4-4972-8298-3efca57c1338" (UID: "2f8aef16-11a4-4972-8298-3efca57c1338"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:03 crc kubenswrapper[4687]: I0131 07:15:03.002655 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f8aef16-11a4-4972-8298-3efca57c1338-kube-api-access-mndrb" (OuterVolumeSpecName: "kube-api-access-mndrb") pod "2f8aef16-11a4-4972-8298-3efca57c1338" (UID: "2f8aef16-11a4-4972-8298-3efca57c1338"). InnerVolumeSpecName "kube-api-access-mndrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:03 crc kubenswrapper[4687]: I0131 07:15:03.094739 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f8aef16-11a4-4972-8298-3efca57c1338-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:03 crc kubenswrapper[4687]: I0131 07:15:03.094772 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f8aef16-11a4-4972-8298-3efca57c1338-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:03 crc kubenswrapper[4687]: I0131 07:15:03.094782 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mndrb\" (UniqueName: \"kubernetes.io/projected/2f8aef16-11a4-4972-8298-3efca57c1338-kube-api-access-mndrb\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:03 crc kubenswrapper[4687]: I0131 07:15:03.705488 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6" event={"ID":"2f8aef16-11a4-4972-8298-3efca57c1338","Type":"ContainerDied","Data":"419e4c2ccae21dab2962dea5b7fa4da7a412371725983182f601dc454dc67aff"} Jan 31 07:15:03 crc kubenswrapper[4687]: I0131 07:15:03.705788 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="419e4c2ccae21dab2962dea5b7fa4da7a412371725983182f601dc454dc67aff" Jan 31 07:15:03 crc kubenswrapper[4687]: I0131 07:15:03.705552 4687 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497395-q62t6" Jan 31 07:15:07 crc kubenswrapper[4687]: I0131 07:15:07.242525 4687 scope.go:117] "RemoveContainer" containerID="ed66e649a15f0fc6ad9b0c05104cfb0b1697da8d3a52b8eb932bc7cf80e0109a" Jan 31 07:15:22 crc kubenswrapper[4687]: I0131 07:15:22.535844 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lrz96"] Jan 31 07:15:22 crc kubenswrapper[4687]: E0131 07:15:22.536707 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8aef16-11a4-4972-8298-3efca57c1338" containerName="collect-profiles" Jan 31 07:15:22 crc kubenswrapper[4687]: I0131 07:15:22.536720 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8aef16-11a4-4972-8298-3efca57c1338" containerName="collect-profiles" Jan 31 07:15:22 crc kubenswrapper[4687]: I0131 07:15:22.536850 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8aef16-11a4-4972-8298-3efca57c1338" containerName="collect-profiles" Jan 31 07:15:22 crc kubenswrapper[4687]: I0131 07:15:22.537801 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:22 crc kubenswrapper[4687]: I0131 07:15:22.557504 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lrz96"] Jan 31 07:15:22 crc kubenswrapper[4687]: I0131 07:15:22.719216 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05210f4-4b71-4670-a1f5-e66c2cd1056c-catalog-content\") pod \"certified-operators-lrz96\" (UID: \"b05210f4-4b71-4670-a1f5-e66c2cd1056c\") " pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:22 crc kubenswrapper[4687]: I0131 07:15:22.719337 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05210f4-4b71-4670-a1f5-e66c2cd1056c-utilities\") pod \"certified-operators-lrz96\" (UID: \"b05210f4-4b71-4670-a1f5-e66c2cd1056c\") " pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:22 crc kubenswrapper[4687]: I0131 07:15:22.719430 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvsnx\" (UniqueName: \"kubernetes.io/projected/b05210f4-4b71-4670-a1f5-e66c2cd1056c-kube-api-access-zvsnx\") pod \"certified-operators-lrz96\" (UID: \"b05210f4-4b71-4670-a1f5-e66c2cd1056c\") " pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:22 crc kubenswrapper[4687]: I0131 07:15:22.820523 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05210f4-4b71-4670-a1f5-e66c2cd1056c-catalog-content\") pod \"certified-operators-lrz96\" (UID: \"b05210f4-4b71-4670-a1f5-e66c2cd1056c\") " pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:22 crc kubenswrapper[4687]: I0131 07:15:22.820638 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05210f4-4b71-4670-a1f5-e66c2cd1056c-utilities\") pod \"certified-operators-lrz96\" (UID: \"b05210f4-4b71-4670-a1f5-e66c2cd1056c\") " pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:22 crc kubenswrapper[4687]: I0131 07:15:22.820707 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvsnx\" (UniqueName: \"kubernetes.io/projected/b05210f4-4b71-4670-a1f5-e66c2cd1056c-kube-api-access-zvsnx\") pod \"certified-operators-lrz96\" (UID: \"b05210f4-4b71-4670-a1f5-e66c2cd1056c\") " pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:22 crc kubenswrapper[4687]: I0131 07:15:22.821239 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05210f4-4b71-4670-a1f5-e66c2cd1056c-catalog-content\") pod \"certified-operators-lrz96\" (UID: \"b05210f4-4b71-4670-a1f5-e66c2cd1056c\") " pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:22 crc kubenswrapper[4687]: I0131 07:15:22.821296 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05210f4-4b71-4670-a1f5-e66c2cd1056c-utilities\") pod \"certified-operators-lrz96\" (UID: \"b05210f4-4b71-4670-a1f5-e66c2cd1056c\") " pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:22 crc kubenswrapper[4687]: I0131 07:15:22.859703 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvsnx\" (UniqueName: \"kubernetes.io/projected/b05210f4-4b71-4670-a1f5-e66c2cd1056c-kube-api-access-zvsnx\") pod \"certified-operators-lrz96\" (UID: \"b05210f4-4b71-4670-a1f5-e66c2cd1056c\") " pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:23 crc kubenswrapper[4687]: I0131 07:15:23.153562 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:23 crc kubenswrapper[4687]: I0131 07:15:23.389678 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:15:23 crc kubenswrapper[4687]: I0131 07:15:23.390356 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="27fb68cd-53bd-4337-b199-605b7c23c33b" containerName="glance-log" containerID="cri-o://044b087f1de9a148230e1198bf558c0aa8fe71f1ffb75b8d40f78b3c43f288d7" gracePeriod=30 Jan 31 07:15:23 crc kubenswrapper[4687]: I0131 07:15:23.390905 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="27fb68cd-53bd-4337-b199-605b7c23c33b" containerName="glance-httpd" containerID="cri-o://0a543146aebf04be8a1e68d15aa1a9e28e0487231e42db01fb1425c4edac7936" gracePeriod=30 Jan 31 07:15:23 crc kubenswrapper[4687]: I0131 07:15:23.514866 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lrz96"] Jan 31 07:15:23 crc kubenswrapper[4687]: I0131 07:15:23.644723 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:15:23 crc kubenswrapper[4687]: I0131 07:15:23.645002 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="a9f97349-9bfe-4c6e-bddb-a40db8f381b0" containerName="glance-log" containerID="cri-o://92a1c548184fd98a8308bf19adae1ca910f7fadc76ee7fe6650a340855d405ff" gracePeriod=30 Jan 31 07:15:23 crc kubenswrapper[4687]: I0131 07:15:23.645384 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="a9f97349-9bfe-4c6e-bddb-a40db8f381b0" containerName="glance-httpd" 
containerID="cri-o://f4b34b1fa14b81512e9a2bb2b6de67d2f5f4aa403f74b9ca214266b2c2a9ab90" gracePeriod=30 Jan 31 07:15:23 crc kubenswrapper[4687]: I0131 07:15:23.872616 4687 generic.go:334] "Generic (PLEG): container finished" podID="27fb68cd-53bd-4337-b199-605b7c23c33b" containerID="044b087f1de9a148230e1198bf558c0aa8fe71f1ffb75b8d40f78b3c43f288d7" exitCode=143 Jan 31 07:15:23 crc kubenswrapper[4687]: I0131 07:15:23.872700 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"27fb68cd-53bd-4337-b199-605b7c23c33b","Type":"ContainerDied","Data":"044b087f1de9a148230e1198bf558c0aa8fe71f1ffb75b8d40f78b3c43f288d7"} Jan 31 07:15:23 crc kubenswrapper[4687]: I0131 07:15:23.874638 4687 generic.go:334] "Generic (PLEG): container finished" podID="b05210f4-4b71-4670-a1f5-e66c2cd1056c" containerID="84b779fb848b47d3a9ea56058532fd72d21dabeccf47f8703be9c7ad32429297" exitCode=0 Jan 31 07:15:23 crc kubenswrapper[4687]: I0131 07:15:23.874687 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz96" event={"ID":"b05210f4-4b71-4670-a1f5-e66c2cd1056c","Type":"ContainerDied","Data":"84b779fb848b47d3a9ea56058532fd72d21dabeccf47f8703be9c7ad32429297"} Jan 31 07:15:23 crc kubenswrapper[4687]: I0131 07:15:23.874728 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz96" event={"ID":"b05210f4-4b71-4670-a1f5-e66c2cd1056c","Type":"ContainerStarted","Data":"f31070bfabd920b17d093fdcc2a199b7d4d8031c08f0abe8711366e06ce61f29"} Jan 31 07:15:23 crc kubenswrapper[4687]: I0131 07:15:23.876459 4687 generic.go:334] "Generic (PLEG): container finished" podID="a9f97349-9bfe-4c6e-bddb-a40db8f381b0" containerID="92a1c548184fd98a8308bf19adae1ca910f7fadc76ee7fe6650a340855d405ff" exitCode=143 Jan 31 07:15:23 crc kubenswrapper[4687]: I0131 07:15:23.876480 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"a9f97349-9bfe-4c6e-bddb-a40db8f381b0","Type":"ContainerDied","Data":"92a1c548184fd98a8308bf19adae1ca910f7fadc76ee7fe6650a340855d405ff"} Jan 31 07:15:24 crc kubenswrapper[4687]: I0131 07:15:24.718231 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-4wg52"] Jan 31 07:15:24 crc kubenswrapper[4687]: I0131 07:15:24.729081 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-4wg52"] Jan 31 07:15:24 crc kubenswrapper[4687]: I0131 07:15:24.845381 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:15:24 crc kubenswrapper[4687]: I0131 07:15:24.845619 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="354c2b73-6b6c-4b19-b1e3-1bb8e221150a" containerName="glance-log" containerID="cri-o://05e94f66091700e0b1eafa01e8e56f2630a4557a5d71fb87e17be17c703e64df" gracePeriod=30 Jan 31 07:15:24 crc kubenswrapper[4687]: I0131 07:15:24.845960 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="354c2b73-6b6c-4b19-b1e3-1bb8e221150a" containerName="glance-httpd" containerID="cri-o://f957cfcfdf342f9739752e920898124910043b2ff6c3de218c0a44305ce2ad99" gracePeriod=30 Jan 31 07:15:24 crc kubenswrapper[4687]: I0131 07:15:24.861281 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glancebbaa-account-delete-8r5xh"] Jan 31 07:15:24 crc kubenswrapper[4687]: I0131 07:15:24.862170 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glancebbaa-account-delete-8r5xh" Jan 31 07:15:24 crc kubenswrapper[4687]: I0131 07:15:24.904025 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz96" event={"ID":"b05210f4-4b71-4670-a1f5-e66c2cd1056c","Type":"ContainerStarted","Data":"498dc096fcee3009739d17169c6d39d3d97dc62b2155c48280fc723c9846e685"} Jan 31 07:15:24 crc kubenswrapper[4687]: I0131 07:15:24.953606 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca37316-630c-44e6-ab4c-5beb44c545de-operator-scripts\") pod \"glancebbaa-account-delete-8r5xh\" (UID: \"5ca37316-630c-44e6-ab4c-5beb44c545de\") " pod="glance-kuttl-tests/glancebbaa-account-delete-8r5xh" Jan 31 07:15:24 crc kubenswrapper[4687]: I0131 07:15:24.953659 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkk67\" (UniqueName: \"kubernetes.io/projected/5ca37316-630c-44e6-ab4c-5beb44c545de-kube-api-access-gkk67\") pod \"glancebbaa-account-delete-8r5xh\" (UID: \"5ca37316-630c-44e6-ab4c-5beb44c545de\") " pod="glance-kuttl-tests/glancebbaa-account-delete-8r5xh" Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.016481 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancebbaa-account-delete-8r5xh"] Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.023507 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.023745 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="787ae24c-3f78-4d06-b797-e50650509346" containerName="glance-log" containerID="cri-o://a07238fab77e2b246e30650c672a3563683cc58d5005da94cb6a3f6566c697ee" gracePeriod=30 Jan 31 
07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.023875 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="787ae24c-3f78-4d06-b797-e50650509346" containerName="glance-httpd" containerID="cri-o://3e36cd32b9ad9b5ce0d4a234c68edb3099f4a2ddc3a14fa50f2cb4a79f45d5c0" gracePeriod=30
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.055972 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca37316-630c-44e6-ab4c-5beb44c545de-operator-scripts\") pod \"glancebbaa-account-delete-8r5xh\" (UID: \"5ca37316-630c-44e6-ab4c-5beb44c545de\") " pod="glance-kuttl-tests/glancebbaa-account-delete-8r5xh"
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.056021 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkk67\" (UniqueName: \"kubernetes.io/projected/5ca37316-630c-44e6-ab4c-5beb44c545de-kube-api-access-gkk67\") pod \"glancebbaa-account-delete-8r5xh\" (UID: \"5ca37316-630c-44e6-ab4c-5beb44c545de\") " pod="glance-kuttl-tests/glancebbaa-account-delete-8r5xh"
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.056589 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca37316-630c-44e6-ab4c-5beb44c545de-operator-scripts\") pod \"glancebbaa-account-delete-8r5xh\" (UID: \"5ca37316-630c-44e6-ab4c-5beb44c545de\") " pod="glance-kuttl-tests/glancebbaa-account-delete-8r5xh"
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.092831 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkk67\" (UniqueName: \"kubernetes.io/projected/5ca37316-630c-44e6-ab4c-5beb44c545de-kube-api-access-gkk67\") pod \"glancebbaa-account-delete-8r5xh\" (UID: \"5ca37316-630c-44e6-ab4c-5beb44c545de\") " pod="glance-kuttl-tests/glancebbaa-account-delete-8r5xh"
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.098820 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="787ae24c-3f78-4d06-b797-e50650509346" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.152:9292/healthcheck\": EOF"
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.099202 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="787ae24c-3f78-4d06-b797-e50650509346" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.152:9292/healthcheck\": EOF"
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.195380 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancebbaa-account-delete-8r5xh"
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.209149 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="787ae24c-3f78-4d06-b797-e50650509346" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.152:9292/healthcheck\": EOF"
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.612751 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="309e722b-24cb-44b9-8afe-7c131a789fa5" path="/var/lib/kubelet/pods/309e722b-24cb-44b9-8afe-7c131a789fa5/volumes"
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.654706 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancebbaa-account-delete-8r5xh"]
Jan 31 07:15:25 crc kubenswrapper[4687]: W0131 07:15:25.664941 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ca37316_630c_44e6_ab4c_5beb44c545de.slice/crio-499328a883b658d0c015b594bb468df728a817b7fad2c240e1c04379943ba120 WatchSource:0}: Error finding container 499328a883b658d0c015b594bb468df728a817b7fad2c240e1c04379943ba120: Status 404 returned error can't find the container with id 499328a883b658d0c015b594bb468df728a817b7fad2c240e1c04379943ba120
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.914549 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancebbaa-account-delete-8r5xh" event={"ID":"5ca37316-630c-44e6-ab4c-5beb44c545de","Type":"ContainerStarted","Data":"90fd08432e434333002676c2a6c96027767e78106a0217fd0ac1f8dba86d32ed"}
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.914839 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancebbaa-account-delete-8r5xh" event={"ID":"5ca37316-630c-44e6-ab4c-5beb44c545de","Type":"ContainerStarted","Data":"499328a883b658d0c015b594bb468df728a817b7fad2c240e1c04379943ba120"}
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.918084 4687 generic.go:334] "Generic (PLEG): container finished" podID="787ae24c-3f78-4d06-b797-e50650509346" containerID="a07238fab77e2b246e30650c672a3563683cc58d5005da94cb6a3f6566c697ee" exitCode=143
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.918107 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"787ae24c-3f78-4d06-b797-e50650509346","Type":"ContainerDied","Data":"a07238fab77e2b246e30650c672a3563683cc58d5005da94cb6a3f6566c697ee"}
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.920693 4687 generic.go:334] "Generic (PLEG): container finished" podID="b05210f4-4b71-4670-a1f5-e66c2cd1056c" containerID="498dc096fcee3009739d17169c6d39d3d97dc62b2155c48280fc723c9846e685" exitCode=0
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.920780 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz96" event={"ID":"b05210f4-4b71-4670-a1f5-e66c2cd1056c","Type":"ContainerDied","Data":"498dc096fcee3009739d17169c6d39d3d97dc62b2155c48280fc723c9846e685"}
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.922247 4687 generic.go:334] "Generic (PLEG): container finished" podID="354c2b73-6b6c-4b19-b1e3-1bb8e221150a" containerID="05e94f66091700e0b1eafa01e8e56f2630a4557a5d71fb87e17be17c703e64df" exitCode=143
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.922283 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"354c2b73-6b6c-4b19-b1e3-1bb8e221150a","Type":"ContainerDied","Data":"05e94f66091700e0b1eafa01e8e56f2630a4557a5d71fb87e17be17c703e64df"}
Jan 31 07:15:25 crc kubenswrapper[4687]: I0131 07:15:25.944224 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glancebbaa-account-delete-8r5xh" podStartSLOduration=1.944198894 podStartE2EDuration="1.944198894s" podCreationTimestamp="2026-01-31 07:15:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:15:25.93890623 +0000 UTC m=+1952.216165805" watchObservedRunningTime="2026-01-31 07:15:25.944198894 +0000 UTC m=+1952.221458469"
Jan 31 07:15:26 crc kubenswrapper[4687]: I0131 07:15:26.540596 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-thb94"]
Jan 31 07:15:26 crc kubenswrapper[4687]: I0131 07:15:26.542661 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thb94"
Jan 31 07:15:26 crc kubenswrapper[4687]: I0131 07:15:26.558765 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-thb94"]
Jan 31 07:15:26 crc kubenswrapper[4687]: I0131 07:15:26.683925 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdlfj\" (UniqueName: \"kubernetes.io/projected/9381a705-2254-4dc0-84c4-5b510d03ae92-kube-api-access-wdlfj\") pod \"redhat-operators-thb94\" (UID: \"9381a705-2254-4dc0-84c4-5b510d03ae92\") " pod="openshift-marketplace/redhat-operators-thb94"
Jan 31 07:15:26 crc kubenswrapper[4687]: I0131 07:15:26.683996 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9381a705-2254-4dc0-84c4-5b510d03ae92-catalog-content\") pod \"redhat-operators-thb94\" (UID: \"9381a705-2254-4dc0-84c4-5b510d03ae92\") " pod="openshift-marketplace/redhat-operators-thb94"
Jan 31 07:15:26 crc kubenswrapper[4687]: I0131 07:15:26.684147 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9381a705-2254-4dc0-84c4-5b510d03ae92-utilities\") pod \"redhat-operators-thb94\" (UID: \"9381a705-2254-4dc0-84c4-5b510d03ae92\") " pod="openshift-marketplace/redhat-operators-thb94"
Jan 31 07:15:26 crc kubenswrapper[4687]: I0131 07:15:26.785335 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdlfj\" (UniqueName: \"kubernetes.io/projected/9381a705-2254-4dc0-84c4-5b510d03ae92-kube-api-access-wdlfj\") pod \"redhat-operators-thb94\" (UID: \"9381a705-2254-4dc0-84c4-5b510d03ae92\") " pod="openshift-marketplace/redhat-operators-thb94"
Jan 31 07:15:26 crc kubenswrapper[4687]: I0131 07:15:26.785480 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9381a705-2254-4dc0-84c4-5b510d03ae92-catalog-content\") pod \"redhat-operators-thb94\" (UID: \"9381a705-2254-4dc0-84c4-5b510d03ae92\") " pod="openshift-marketplace/redhat-operators-thb94"
Jan 31 07:15:26 crc kubenswrapper[4687]: I0131 07:15:26.785583 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9381a705-2254-4dc0-84c4-5b510d03ae92-utilities\") pod \"redhat-operators-thb94\" (UID: \"9381a705-2254-4dc0-84c4-5b510d03ae92\") " pod="openshift-marketplace/redhat-operators-thb94"
Jan 31 07:15:26 crc kubenswrapper[4687]: I0131 07:15:26.786639 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9381a705-2254-4dc0-84c4-5b510d03ae92-utilities\") pod \"redhat-operators-thb94\" (UID: \"9381a705-2254-4dc0-84c4-5b510d03ae92\") " pod="openshift-marketplace/redhat-operators-thb94"
Jan 31 07:15:26 crc kubenswrapper[4687]: I0131 07:15:26.786706 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9381a705-2254-4dc0-84c4-5b510d03ae92-catalog-content\") pod \"redhat-operators-thb94\" (UID: \"9381a705-2254-4dc0-84c4-5b510d03ae92\") " pod="openshift-marketplace/redhat-operators-thb94"
Jan 31 07:15:26 crc kubenswrapper[4687]: I0131 07:15:26.812173 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdlfj\" (UniqueName: \"kubernetes.io/projected/9381a705-2254-4dc0-84c4-5b510d03ae92-kube-api-access-wdlfj\") pod \"redhat-operators-thb94\" (UID: \"9381a705-2254-4dc0-84c4-5b510d03ae92\") " pod="openshift-marketplace/redhat-operators-thb94"
Jan 31 07:15:26 crc kubenswrapper[4687]: I0131 07:15:26.867400 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thb94"
Jan 31 07:15:26 crc kubenswrapper[4687]: I0131 07:15:26.992805 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz96" event={"ID":"b05210f4-4b71-4670-a1f5-e66c2cd1056c","Type":"ContainerStarted","Data":"9d1baebc8d16e31d266deb497f79ed1896e324601b94fcdd4232f60a8af19cb1"}
Jan 31 07:15:26 crc kubenswrapper[4687]: I0131 07:15:26.995678 4687 generic.go:334] "Generic (PLEG): container finished" podID="a9f97349-9bfe-4c6e-bddb-a40db8f381b0" containerID="f4b34b1fa14b81512e9a2bb2b6de67d2f5f4aa403f74b9ca214266b2c2a9ab90" exitCode=0
Jan 31 07:15:26 crc kubenswrapper[4687]: I0131 07:15:26.995746 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"a9f97349-9bfe-4c6e-bddb-a40db8f381b0","Type":"ContainerDied","Data":"f4b34b1fa14b81512e9a2bb2b6de67d2f5f4aa403f74b9ca214266b2c2a9ab90"}
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.001863 4687 generic.go:334] "Generic (PLEG): container finished" podID="27fb68cd-53bd-4337-b199-605b7c23c33b" containerID="0a543146aebf04be8a1e68d15aa1a9e28e0487231e42db01fb1425c4edac7936" exitCode=0
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.003195 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"27fb68cd-53bd-4337-b199-605b7c23c33b","Type":"ContainerDied","Data":"0a543146aebf04be8a1e68d15aa1a9e28e0487231e42db01fb1425c4edac7936"}
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.004842 4687 generic.go:334] "Generic (PLEG): container finished" podID="5ca37316-630c-44e6-ab4c-5beb44c545de" containerID="90fd08432e434333002676c2a6c96027767e78106a0217fd0ac1f8dba86d32ed" exitCode=0
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.004870 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancebbaa-account-delete-8r5xh" event={"ID":"5ca37316-630c-44e6-ab4c-5beb44c545de","Type":"ContainerDied","Data":"90fd08432e434333002676c2a6c96027767e78106a0217fd0ac1f8dba86d32ed"}
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.026232 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lrz96" podStartSLOduration=2.537770134 podStartE2EDuration="5.026214933s" podCreationTimestamp="2026-01-31 07:15:22 +0000 UTC" firstStartedPulling="2026-01-31 07:15:23.876812848 +0000 UTC m=+1950.154072423" lastFinishedPulling="2026-01-31 07:15:26.365257647 +0000 UTC m=+1952.642517222" observedRunningTime="2026-01-31 07:15:27.022111751 +0000 UTC m=+1953.299371336" watchObservedRunningTime="2026-01-31 07:15:27.026214933 +0000 UTC m=+1953.303474508"
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.195177 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.294452 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"27fb68cd-53bd-4337-b199-605b7c23c33b\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.294523 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/27fb68cd-53bd-4337-b199-605b7c23c33b-httpd-run\") pod \"27fb68cd-53bd-4337-b199-605b7c23c33b\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.294577 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fb68cd-53bd-4337-b199-605b7c23c33b-config-data\") pod \"27fb68cd-53bd-4337-b199-605b7c23c33b\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.294645 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"27fb68cd-53bd-4337-b199-605b7c23c33b\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.294676 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-etc-nvme\") pod \"27fb68cd-53bd-4337-b199-605b7c23c33b\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.294695 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-sys\") pod \"27fb68cd-53bd-4337-b199-605b7c23c33b\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.294715 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27fb68cd-53bd-4337-b199-605b7c23c33b-scripts\") pod \"27fb68cd-53bd-4337-b199-605b7c23c33b\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.294770 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-var-locks-brick\") pod \"27fb68cd-53bd-4337-b199-605b7c23c33b\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.294837 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b85w7\" (UniqueName: \"kubernetes.io/projected/27fb68cd-53bd-4337-b199-605b7c23c33b-kube-api-access-b85w7\") pod \"27fb68cd-53bd-4337-b199-605b7c23c33b\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.294868 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27fb68cd-53bd-4337-b199-605b7c23c33b-logs\") pod \"27fb68cd-53bd-4337-b199-605b7c23c33b\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.294881 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-lib-modules\") pod \"27fb68cd-53bd-4337-b199-605b7c23c33b\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.294898 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-etc-iscsi\") pod \"27fb68cd-53bd-4337-b199-605b7c23c33b\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.294928 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-run\") pod \"27fb68cd-53bd-4337-b199-605b7c23c33b\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.294956 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-dev\") pod \"27fb68cd-53bd-4337-b199-605b7c23c33b\" (UID: \"27fb68cd-53bd-4337-b199-605b7c23c33b\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.295454 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "27fb68cd-53bd-4337-b199-605b7c23c33b" (UID: "27fb68cd-53bd-4337-b199-605b7c23c33b"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.295531 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "27fb68cd-53bd-4337-b199-605b7c23c33b" (UID: "27fb68cd-53bd-4337-b199-605b7c23c33b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.295575 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "27fb68cd-53bd-4337-b199-605b7c23c33b" (UID: "27fb68cd-53bd-4337-b199-605b7c23c33b"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.295599 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-run" (OuterVolumeSpecName: "run") pod "27fb68cd-53bd-4337-b199-605b7c23c33b" (UID: "27fb68cd-53bd-4337-b199-605b7c23c33b"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.295618 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-dev" (OuterVolumeSpecName: "dev") pod "27fb68cd-53bd-4337-b199-605b7c23c33b" (UID: "27fb68cd-53bd-4337-b199-605b7c23c33b"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.295803 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27fb68cd-53bd-4337-b199-605b7c23c33b-logs" (OuterVolumeSpecName: "logs") pod "27fb68cd-53bd-4337-b199-605b7c23c33b" (UID: "27fb68cd-53bd-4337-b199-605b7c23c33b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.295901 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "27fb68cd-53bd-4337-b199-605b7c23c33b" (UID: "27fb68cd-53bd-4337-b199-605b7c23c33b"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.296897 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27fb68cd-53bd-4337-b199-605b7c23c33b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "27fb68cd-53bd-4337-b199-605b7c23c33b" (UID: "27fb68cd-53bd-4337-b199-605b7c23c33b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.297099 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-sys" (OuterVolumeSpecName: "sys") pod "27fb68cd-53bd-4337-b199-605b7c23c33b" (UID: "27fb68cd-53bd-4337-b199-605b7c23c33b"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.300585 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance-cache") pod "27fb68cd-53bd-4337-b199-605b7c23c33b" (UID: "27fb68cd-53bd-4337-b199-605b7c23c33b"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.301571 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "27fb68cd-53bd-4337-b199-605b7c23c33b" (UID: "27fb68cd-53bd-4337-b199-605b7c23c33b"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.301730 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27fb68cd-53bd-4337-b199-605b7c23c33b-kube-api-access-b85w7" (OuterVolumeSpecName: "kube-api-access-b85w7") pod "27fb68cd-53bd-4337-b199-605b7c23c33b" (UID: "27fb68cd-53bd-4337-b199-605b7c23c33b"). InnerVolumeSpecName "kube-api-access-b85w7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.302851 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27fb68cd-53bd-4337-b199-605b7c23c33b-scripts" (OuterVolumeSpecName: "scripts") pod "27fb68cd-53bd-4337-b199-605b7c23c33b" (UID: "27fb68cd-53bd-4337-b199-605b7c23c33b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.334395 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27fb68cd-53bd-4337-b199-605b7c23c33b-config-data" (OuterVolumeSpecName: "config-data") pod "27fb68cd-53bd-4337-b199-605b7c23c33b" (UID: "27fb68cd-53bd-4337-b199-605b7c23c33b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.397229 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.397301 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-sys\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.397316 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27fb68cd-53bd-4337-b199-605b7c23c33b-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.397329 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-etc-nvme\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.397341 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-var-locks-brick\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.397381 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b85w7\" (UniqueName: \"kubernetes.io/projected/27fb68cd-53bd-4337-b199-605b7c23c33b-kube-api-access-b85w7\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.397394 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27fb68cd-53bd-4337-b199-605b7c23c33b-logs\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.397438 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-lib-modules\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.397452 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-etc-iscsi\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.397463 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-run\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.397479 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/27fb68cd-53bd-4337-b199-605b7c23c33b-dev\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.397529 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.397545 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/27fb68cd-53bd-4337-b199-605b7c23c33b-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.397557 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fb68cd-53bd-4337-b199-605b7c23c33b-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.398448 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1"
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.415349 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.415445 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.498612 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.498958 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499013 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-584gn\" (UniqueName: \"kubernetes.io/projected/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-kube-api-access-584gn\") pod \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499091 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-logs\") pod \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499136 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-run\") pod \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499171 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-scripts\") pod \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499224 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-var-locks-brick\") pod \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499276 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-config-data\") pod \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499309 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-dev\") pod \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499354 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-httpd-run\") pod \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499454 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-lib-modules\") pod \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499481 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-sys\") pod \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499505 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-dev" (OuterVolumeSpecName: "dev") pod "a9f97349-9bfe-4c6e-bddb-a40db8f381b0" (UID: "a9f97349-9bfe-4c6e-bddb-a40db8f381b0"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499529 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-run" (OuterVolumeSpecName: "run") pod "a9f97349-9bfe-4c6e-bddb-a40db8f381b0" (UID: "a9f97349-9bfe-4c6e-bddb-a40db8f381b0"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499486 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "a9f97349-9bfe-4c6e-bddb-a40db8f381b0" (UID: "a9f97349-9bfe-4c6e-bddb-a40db8f381b0"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499563 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "a9f97349-9bfe-4c6e-bddb-a40db8f381b0" (UID: "a9f97349-9bfe-4c6e-bddb-a40db8f381b0"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499604 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-sys" (OuterVolumeSpecName: "sys") pod "a9f97349-9bfe-4c6e-bddb-a40db8f381b0" (UID: "a9f97349-9bfe-4c6e-bddb-a40db8f381b0"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499514 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-etc-iscsi\") pod \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499604 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a9f97349-9bfe-4c6e-bddb-a40db8f381b0" (UID: "a9f97349-9bfe-4c6e-bddb-a40db8f381b0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499635 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-etc-nvme\") pod \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\" (UID: \"a9f97349-9bfe-4c6e-bddb-a40db8f381b0\") "
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499869 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a9f97349-9bfe-4c6e-bddb-a40db8f381b0" (UID: "a9f97349-9bfe-4c6e-bddb-a40db8f381b0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.499905 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "a9f97349-9bfe-4c6e-bddb-a40db8f381b0" (UID: "a9f97349-9bfe-4c6e-bddb-a40db8f381b0"). InnerVolumeSpecName "etc-nvme".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.500107 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.500126 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.500139 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.500152 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.500165 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.500176 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.500187 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.500198 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.500210 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.500221 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.501915 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-logs" (OuterVolumeSpecName: "logs") pod "a9f97349-9bfe-4c6e-bddb-a40db8f381b0" (UID: "a9f97349-9bfe-4c6e-bddb-a40db8f381b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.504364 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-kube-api-access-584gn" (OuterVolumeSpecName: "kube-api-access-584gn") pod "a9f97349-9bfe-4c6e-bddb-a40db8f381b0" (UID: "a9f97349-9bfe-4c6e-bddb-a40db8f381b0"). InnerVolumeSpecName "kube-api-access-584gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.505170 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "a9f97349-9bfe-4c6e-bddb-a40db8f381b0" (UID: "a9f97349-9bfe-4c6e-bddb-a40db8f381b0"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.505336 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance-cache") pod "a9f97349-9bfe-4c6e-bddb-a40db8f381b0" (UID: "a9f97349-9bfe-4c6e-bddb-a40db8f381b0"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.508028 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-scripts" (OuterVolumeSpecName: "scripts") pod "a9f97349-9bfe-4c6e-bddb-a40db8f381b0" (UID: "a9f97349-9bfe-4c6e-bddb-a40db8f381b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.518519 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-thb94"] Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.551784 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-config-data" (OuterVolumeSpecName: "config-data") pod "a9f97349-9bfe-4c6e-bddb-a40db8f381b0" (UID: "a9f97349-9bfe-4c6e-bddb-a40db8f381b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.601815 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.601847 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.601859 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-584gn\" (UniqueName: \"kubernetes.io/projected/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-kube-api-access-584gn\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.601874 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.601882 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.601893 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9f97349-9bfe-4c6e-bddb-a40db8f381b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.617484 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.617493 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.702843 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:27 crc kubenswrapper[4687]: I0131 07:15:27.702878 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.022780 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"a9f97349-9bfe-4c6e-bddb-a40db8f381b0","Type":"ContainerDied","Data":"91b7b4ad74a05c18cc24ac9640d25a107c79807b73ea59fca60c2551b3889a8a"} Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.022852 4687 scope.go:117] "RemoveContainer" containerID="f4b34b1fa14b81512e9a2bb2b6de67d2f5f4aa403f74b9ca214266b2c2a9ab90" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.023115 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.035248 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.035460 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"27fb68cd-53bd-4337-b199-605b7c23c33b","Type":"ContainerDied","Data":"44b9ecc73876a5faac23e7ab0fc7f91342f51c518fdc7bce3093ef6b7a66eeda"} Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.038698 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thb94" event={"ID":"9381a705-2254-4dc0-84c4-5b510d03ae92","Type":"ContainerDied","Data":"5e6fbdfb5cb5bb43a081fb946d61daa24bae127297b798d3bc65332db176acdd"} Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.038767 4687 generic.go:334] "Generic (PLEG): container finished" podID="9381a705-2254-4dc0-84c4-5b510d03ae92" containerID="5e6fbdfb5cb5bb43a081fb946d61daa24bae127297b798d3bc65332db176acdd" exitCode=0 Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.038860 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thb94" event={"ID":"9381a705-2254-4dc0-84c4-5b510d03ae92","Type":"ContainerStarted","Data":"841402456dcae40823f7ec62bad632750ddea3b8ac32dfac2d63629c913889bc"} Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.059753 4687 scope.go:117] "RemoveContainer" containerID="92a1c548184fd98a8308bf19adae1ca910f7fadc76ee7fe6650a340855d405ff" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.060522 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.082235 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.091961 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["glance-kuttl-tests/keystone-1184-account-create-update-jk5qs"] Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.102510 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-db-create-qnsvw"] Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.102903 4687 scope.go:117] "RemoveContainer" containerID="0a543146aebf04be8a1e68d15aa1a9e28e0487231e42db01fb1425c4edac7936" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.110926 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-db-create-qnsvw"] Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.119770 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-1184-account-create-update-jk5qs"] Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.125921 4687 scope.go:117] "RemoveContainer" containerID="044b087f1de9a148230e1198bf558c0aa8fe71f1ffb75b8d40f78b3c43f288d7" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.134297 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.139977 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.508098 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glancebbaa-account-delete-8r5xh" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.616038 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca37316-630c-44e6-ab4c-5beb44c545de-operator-scripts\") pod \"5ca37316-630c-44e6-ab4c-5beb44c545de\" (UID: \"5ca37316-630c-44e6-ab4c-5beb44c545de\") " Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.616143 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkk67\" (UniqueName: \"kubernetes.io/projected/5ca37316-630c-44e6-ab4c-5beb44c545de-kube-api-access-gkk67\") pod \"5ca37316-630c-44e6-ab4c-5beb44c545de\" (UID: \"5ca37316-630c-44e6-ab4c-5beb44c545de\") " Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.617927 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ca37316-630c-44e6-ab4c-5beb44c545de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ca37316-630c-44e6-ab4c-5beb44c545de" (UID: "5ca37316-630c-44e6-ab4c-5beb44c545de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.625781 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca37316-630c-44e6-ab4c-5beb44c545de-kube-api-access-gkk67" (OuterVolumeSpecName: "kube-api-access-gkk67") pod "5ca37316-630c-44e6-ab4c-5beb44c545de" (UID: "5ca37316-630c-44e6-ab4c-5beb44c545de"). InnerVolumeSpecName "kube-api-access-gkk67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.720566 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkk67\" (UniqueName: \"kubernetes.io/projected/5ca37316-630c-44e6-ab4c-5beb44c545de-kube-api-access-gkk67\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.720610 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca37316-630c-44e6-ab4c-5beb44c545de-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.755274 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924248 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924333 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-sys\") pod \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924358 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-config-data\") pod \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924426 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924455 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-scripts\") pod \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924505 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-dev\") pod \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924515 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-sys" (OuterVolumeSpecName: "sys") pod "354c2b73-6b6c-4b19-b1e3-1bb8e221150a" (UID: "354c2b73-6b6c-4b19-b1e3-1bb8e221150a"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924541 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-lib-modules\") pod \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924583 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "354c2b73-6b6c-4b19-b1e3-1bb8e221150a" (UID: "354c2b73-6b6c-4b19-b1e3-1bb8e221150a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924632 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-logs\") pod \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924662 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-etc-iscsi\") pod \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924705 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-httpd-run\") pod \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924758 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-var-locks-brick\") pod \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924767 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "354c2b73-6b6c-4b19-b1e3-1bb8e221150a" (UID: "354c2b73-6b6c-4b19-b1e3-1bb8e221150a"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924813 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-run\") pod \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924848 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gzdn\" (UniqueName: \"kubernetes.io/projected/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-kube-api-access-9gzdn\") pod \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924878 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-etc-nvme\") pod \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\" (UID: \"354c2b73-6b6c-4b19-b1e3-1bb8e221150a\") " Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.925491 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.925538 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.925550 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924801 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-dev" (OuterVolumeSpecName: "dev") pod "354c2b73-6b6c-4b19-b1e3-1bb8e221150a" (UID: "354c2b73-6b6c-4b19-b1e3-1bb8e221150a"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924841 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "354c2b73-6b6c-4b19-b1e3-1bb8e221150a" (UID: "354c2b73-6b6c-4b19-b1e3-1bb8e221150a"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.924872 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-run" (OuterVolumeSpecName: "run") pod "354c2b73-6b6c-4b19-b1e3-1bb8e221150a" (UID: "354c2b73-6b6c-4b19-b1e3-1bb8e221150a"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.925609 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "354c2b73-6b6c-4b19-b1e3-1bb8e221150a" (UID: "354c2b73-6b6c-4b19-b1e3-1bb8e221150a"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.927873 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-logs" (OuterVolumeSpecName: "logs") pod "354c2b73-6b6c-4b19-b1e3-1bb8e221150a" (UID: "354c2b73-6b6c-4b19-b1e3-1bb8e221150a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.927956 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "354c2b73-6b6c-4b19-b1e3-1bb8e221150a" (UID: "354c2b73-6b6c-4b19-b1e3-1bb8e221150a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.930790 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "354c2b73-6b6c-4b19-b1e3-1bb8e221150a" (UID: "354c2b73-6b6c-4b19-b1e3-1bb8e221150a"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.930880 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage17-crc" (OuterVolumeSpecName: "glance-cache") pod "354c2b73-6b6c-4b19-b1e3-1bb8e221150a" (UID: "354c2b73-6b6c-4b19-b1e3-1bb8e221150a"). InnerVolumeSpecName "local-storage17-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.930985 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-kube-api-access-9gzdn" (OuterVolumeSpecName: "kube-api-access-9gzdn") pod "354c2b73-6b6c-4b19-b1e3-1bb8e221150a" (UID: "354c2b73-6b6c-4b19-b1e3-1bb8e221150a"). InnerVolumeSpecName "kube-api-access-9gzdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.931583 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-scripts" (OuterVolumeSpecName: "scripts") pod "354c2b73-6b6c-4b19-b1e3-1bb8e221150a" (UID: "354c2b73-6b6c-4b19-b1e3-1bb8e221150a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:28 crc kubenswrapper[4687]: I0131 07:15:28.981612 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-config-data" (OuterVolumeSpecName: "config-data") pod "354c2b73-6b6c-4b19-b1e3-1bb8e221150a" (UID: "354c2b73-6b6c-4b19-b1e3-1bb8e221150a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.026896 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.026967 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.026978 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.026987 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.026997 4687 reconciler_common.go:293] 
"Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.027006 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.027017 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gzdn\" (UniqueName: \"kubernetes.io/projected/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-kube-api-access-9gzdn\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.027026 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.027061 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" " Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.027071 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354c2b73-6b6c-4b19-b1e3-1bb8e221150a-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.027083 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.042056 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 07:15:29 crc kubenswrapper[4687]: 
I0131 07:15:29.044239 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage17-crc" (UniqueName: "kubernetes.io/local-volume/local-storage17-crc") on node "crc" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.049473 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thb94" event={"ID":"9381a705-2254-4dc0-84c4-5b510d03ae92","Type":"ContainerStarted","Data":"c2388284e9d13ed0b8fa5d0fdca6c9f69dbbfe2a4dd7b2aab33b7ae07c3bf68d"} Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.051105 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancebbaa-account-delete-8r5xh" event={"ID":"5ca37316-630c-44e6-ab4c-5beb44c545de","Type":"ContainerDied","Data":"499328a883b658d0c015b594bb468df728a817b7fad2c240e1c04379943ba120"} Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.051146 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="499328a883b658d0c015b594bb468df728a817b7fad2c240e1c04379943ba120" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.051196 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glancebbaa-account-delete-8r5xh" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.054481 4687 generic.go:334] "Generic (PLEG): container finished" podID="354c2b73-6b6c-4b19-b1e3-1bb8e221150a" containerID="f957cfcfdf342f9739752e920898124910043b2ff6c3de218c0a44305ce2ad99" exitCode=0 Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.054527 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"354c2b73-6b6c-4b19-b1e3-1bb8e221150a","Type":"ContainerDied","Data":"f957cfcfdf342f9739752e920898124910043b2ff6c3de218c0a44305ce2ad99"} Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.054566 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"354c2b73-6b6c-4b19-b1e3-1bb8e221150a","Type":"ContainerDied","Data":"6c6cab8daf41fc04d3ccfad7ae85e17637dfbdb2647f537e83bdd24027abd2fe"} Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.054584 4687 scope.go:117] "RemoveContainer" containerID="f957cfcfdf342f9739752e920898124910043b2ff6c3de218c0a44305ce2ad99" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.054576 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.080600 4687 scope.go:117] "RemoveContainer" containerID="05e94f66091700e0b1eafa01e8e56f2630a4557a5d71fb87e17be17c703e64df" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.110031 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.122344 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.122785 4687 scope.go:117] "RemoveContainer" containerID="f957cfcfdf342f9739752e920898124910043b2ff6c3de218c0a44305ce2ad99" Jan 31 07:15:29 crc kubenswrapper[4687]: E0131 07:15:29.125985 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f957cfcfdf342f9739752e920898124910043b2ff6c3de218c0a44305ce2ad99\": container with ID starting with f957cfcfdf342f9739752e920898124910043b2ff6c3de218c0a44305ce2ad99 not found: ID does not exist" containerID="f957cfcfdf342f9739752e920898124910043b2ff6c3de218c0a44305ce2ad99" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.126047 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f957cfcfdf342f9739752e920898124910043b2ff6c3de218c0a44305ce2ad99"} err="failed to get container status \"f957cfcfdf342f9739752e920898124910043b2ff6c3de218c0a44305ce2ad99\": rpc error: code = NotFound desc = could not find container \"f957cfcfdf342f9739752e920898124910043b2ff6c3de218c0a44305ce2ad99\": container with ID starting with f957cfcfdf342f9739752e920898124910043b2ff6c3de218c0a44305ce2ad99 not found: ID does not exist" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.126085 4687 scope.go:117] "RemoveContainer" 
containerID="05e94f66091700e0b1eafa01e8e56f2630a4557a5d71fb87e17be17c703e64df" Jan 31 07:15:29 crc kubenswrapper[4687]: E0131 07:15:29.126859 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05e94f66091700e0b1eafa01e8e56f2630a4557a5d71fb87e17be17c703e64df\": container with ID starting with 05e94f66091700e0b1eafa01e8e56f2630a4557a5d71fb87e17be17c703e64df not found: ID does not exist" containerID="05e94f66091700e0b1eafa01e8e56f2630a4557a5d71fb87e17be17c703e64df" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.126926 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05e94f66091700e0b1eafa01e8e56f2630a4557a5d71fb87e17be17c703e64df"} err="failed to get container status \"05e94f66091700e0b1eafa01e8e56f2630a4557a5d71fb87e17be17c703e64df\": rpc error: code = NotFound desc = could not find container \"05e94f66091700e0b1eafa01e8e56f2630a4557a5d71fb87e17be17c703e64df\": container with ID starting with 05e94f66091700e0b1eafa01e8e56f2630a4557a5d71fb87e17be17c703e64df not found: ID does not exist" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.131362 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.131435 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.612059 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27fb68cd-53bd-4337-b199-605b7c23c33b" path="/var/lib/kubelet/pods/27fb68cd-53bd-4337-b199-605b7c23c33b/volumes" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.612824 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9" path="/var/lib/kubelet/pods/2805bcaf-4eb4-4cd7-89fd-62d3a45abcf9/volumes" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.613376 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354c2b73-6b6c-4b19-b1e3-1bb8e221150a" path="/var/lib/kubelet/pods/354c2b73-6b6c-4b19-b1e3-1bb8e221150a/volumes" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.614397 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d" path="/var/lib/kubelet/pods/a0b43f28-b08f-4e18-b8bd-d5950c5a9b9d/volumes" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.614968 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f97349-9bfe-4c6e-bddb-a40db8f381b0" path="/var/lib/kubelet/pods/a9f97349-9bfe-4c6e-bddb-a40db8f381b0/volumes" Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.864453 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-844df"] Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.870635 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-844df"] Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.887743 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glancebbaa-account-delete-8r5xh"] Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.892659 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5"] Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.897229 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-bbaa-account-create-update-wlcr5"] Jan 31 07:15:29 crc kubenswrapper[4687]: I0131 07:15:29.902049 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glancebbaa-account-delete-8r5xh"] Jan 31 07:15:30 crc kubenswrapper[4687]: I0131 07:15:30.065695 4687 
generic.go:334] "Generic (PLEG): container finished" podID="9381a705-2254-4dc0-84c4-5b510d03ae92" containerID="c2388284e9d13ed0b8fa5d0fdca6c9f69dbbfe2a4dd7b2aab33b7ae07c3bf68d" exitCode=0 Jan 31 07:15:30 crc kubenswrapper[4687]: I0131 07:15:30.065757 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thb94" event={"ID":"9381a705-2254-4dc0-84c4-5b510d03ae92","Type":"ContainerDied","Data":"c2388284e9d13ed0b8fa5d0fdca6c9f69dbbfe2a4dd7b2aab33b7ae07c3bf68d"} Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.078088 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thb94" event={"ID":"9381a705-2254-4dc0-84c4-5b510d03ae92","Type":"ContainerStarted","Data":"9e2002bc4c0cd7e0093f9baba4dc1749bfb7a26bc8de987e9116bd6a49b0d76c"} Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.097854 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-thb94" podStartSLOduration=2.6737134190000003 podStartE2EDuration="5.097833382s" podCreationTimestamp="2026-01-31 07:15:26 +0000 UTC" firstStartedPulling="2026-01-31 07:15:28.059057707 +0000 UTC m=+1954.336317282" lastFinishedPulling="2026-01-31 07:15:30.48317767 +0000 UTC m=+1956.760437245" observedRunningTime="2026-01-31 07:15:31.094361977 +0000 UTC m=+1957.371621562" watchObservedRunningTime="2026-01-31 07:15:31.097833382 +0000 UTC m=+1957.375092977" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.613219 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08143980-4935-4851-b898-5b47179db36e" path="/var/lib/kubelet/pods/08143980-4935-4851-b898-5b47179db36e/volumes" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.614785 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca37316-630c-44e6-ab4c-5beb44c545de" path="/var/lib/kubelet/pods/5ca37316-630c-44e6-ab4c-5beb44c545de/volumes" Jan 31 07:15:31 crc 
kubenswrapper[4687]: I0131 07:15:31.615625 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80d27888-0b55-47a9-9e0a-6743273844e5" path="/var/lib/kubelet/pods/80d27888-0b55-47a9-9e0a-6743273844e5/volumes" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.891588 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.976544 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-etc-nvme\") pod \"787ae24c-3f78-4d06-b797-e50650509346\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.976622 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-lib-modules\") pod \"787ae24c-3f78-4d06-b797-e50650509346\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.976668 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"787ae24c-3f78-4d06-b797-e50650509346\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.976695 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-etc-iscsi\") pod \"787ae24c-3f78-4d06-b797-e50650509346\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.976738 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-lib-modules" 
(OuterVolumeSpecName: "lib-modules") pod "787ae24c-3f78-4d06-b797-e50650509346" (UID: "787ae24c-3f78-4d06-b797-e50650509346"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.976792 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "787ae24c-3f78-4d06-b797-e50650509346" (UID: "787ae24c-3f78-4d06-b797-e50650509346"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.976828 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "787ae24c-3f78-4d06-b797-e50650509346" (UID: "787ae24c-3f78-4d06-b797-e50650509346"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.976851 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-dev\") pod \"787ae24c-3f78-4d06-b797-e50650509346\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.976889 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-dev" (OuterVolumeSpecName: "dev") pod "787ae24c-3f78-4d06-b797-e50650509346" (UID: "787ae24c-3f78-4d06-b797-e50650509346"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.976920 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787ae24c-3f78-4d06-b797-e50650509346-config-data\") pod \"787ae24c-3f78-4d06-b797-e50650509346\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.976958 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-var-locks-brick\") pod \"787ae24c-3f78-4d06-b797-e50650509346\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.977097 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/787ae24c-3f78-4d06-b797-e50650509346-httpd-run\") pod \"787ae24c-3f78-4d06-b797-e50650509346\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.977176 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "787ae24c-3f78-4d06-b797-e50650509346" (UID: "787ae24c-3f78-4d06-b797-e50650509346"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.978304 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/787ae24c-3f78-4d06-b797-e50650509346-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "787ae24c-3f78-4d06-b797-e50650509346" (UID: "787ae24c-3f78-4d06-b797-e50650509346"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.978353 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"787ae24c-3f78-4d06-b797-e50650509346\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.978399 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/787ae24c-3f78-4d06-b797-e50650509346-scripts\") pod \"787ae24c-3f78-4d06-b797-e50650509346\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.978872 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/787ae24c-3f78-4d06-b797-e50650509346-logs\") pod \"787ae24c-3f78-4d06-b797-e50650509346\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.978916 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-sys\") pod \"787ae24c-3f78-4d06-b797-e50650509346\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.978953 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-run\") pod \"787ae24c-3f78-4d06-b797-e50650509346\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.979020 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54jl5\" (UniqueName: \"kubernetes.io/projected/787ae24c-3f78-4d06-b797-e50650509346-kube-api-access-54jl5\") pod 
\"787ae24c-3f78-4d06-b797-e50650509346\" (UID: \"787ae24c-3f78-4d06-b797-e50650509346\") " Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.979146 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-sys" (OuterVolumeSpecName: "sys") pod "787ae24c-3f78-4d06-b797-e50650509346" (UID: "787ae24c-3f78-4d06-b797-e50650509346"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.979489 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-run" (OuterVolumeSpecName: "run") pod "787ae24c-3f78-4d06-b797-e50650509346" (UID: "787ae24c-3f78-4d06-b797-e50650509346"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.979573 4687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.979590 4687 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.979600 4687 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-dev\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.979612 4687 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 
07:15:31.979625 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/787ae24c-3f78-4d06-b797-e50650509346-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.979638 4687 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-sys\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.979648 4687 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.979669 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/787ae24c-3f78-4d06-b797-e50650509346-logs" (OuterVolumeSpecName: "logs") pod "787ae24c-3f78-4d06-b797-e50650509346" (UID: "787ae24c-3f78-4d06-b797-e50650509346"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.982159 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance") pod "787ae24c-3f78-4d06-b797-e50650509346" (UID: "787ae24c-3f78-4d06-b797-e50650509346"). InnerVolumeSpecName "local-storage13-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.982310 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/787ae24c-3f78-4d06-b797-e50650509346-kube-api-access-54jl5" (OuterVolumeSpecName: "kube-api-access-54jl5") pod "787ae24c-3f78-4d06-b797-e50650509346" (UID: "787ae24c-3f78-4d06-b797-e50650509346"). InnerVolumeSpecName "kube-api-access-54jl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.982500 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage15-crc" (OuterVolumeSpecName: "glance-cache") pod "787ae24c-3f78-4d06-b797-e50650509346" (UID: "787ae24c-3f78-4d06-b797-e50650509346"). InnerVolumeSpecName "local-storage15-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:15:31 crc kubenswrapper[4687]: I0131 07:15:31.986779 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787ae24c-3f78-4d06-b797-e50650509346-scripts" (OuterVolumeSpecName: "scripts") pod "787ae24c-3f78-4d06-b797-e50650509346" (UID: "787ae24c-3f78-4d06-b797-e50650509346"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.022829 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787ae24c-3f78-4d06-b797-e50650509346-config-data" (OuterVolumeSpecName: "config-data") pod "787ae24c-3f78-4d06-b797-e50650509346" (UID: "787ae24c-3f78-4d06-b797-e50650509346"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.081104 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" " Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.081145 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787ae24c-3f78-4d06-b797-e50650509346-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.081169 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") on node \"crc\" " Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.081185 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/787ae24c-3f78-4d06-b797-e50650509346-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.081197 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/787ae24c-3f78-4d06-b797-e50650509346-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.081209 4687 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/787ae24c-3f78-4d06-b797-e50650509346-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.081221 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54jl5\" (UniqueName: \"kubernetes.io/projected/787ae24c-3f78-4d06-b797-e50650509346-kube-api-access-54jl5\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.092559 4687 generic.go:334] "Generic (PLEG): container finished" 
podID="787ae24c-3f78-4d06-b797-e50650509346" containerID="3e36cd32b9ad9b5ce0d4a234c68edb3099f4a2ddc3a14fa50f2cb4a79f45d5c0" exitCode=0 Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.092637 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.092657 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"787ae24c-3f78-4d06-b797-e50650509346","Type":"ContainerDied","Data":"3e36cd32b9ad9b5ce0d4a234c68edb3099f4a2ddc3a14fa50f2cb4a79f45d5c0"} Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.092701 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"787ae24c-3f78-4d06-b797-e50650509346","Type":"ContainerDied","Data":"632572b547459f3b6395df302dcb03a4260d6bdd0f17435d6a912d6116f11b8c"} Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.092721 4687 scope.go:117] "RemoveContainer" containerID="3e36cd32b9ad9b5ce0d4a234c68edb3099f4a2ddc3a14fa50f2cb4a79f45d5c0" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.097944 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage15-crc" (UniqueName: "kubernetes.io/local-volume/local-storage15-crc") on node "crc" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.098138 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.135755 4687 scope.go:117] "RemoveContainer" containerID="a07238fab77e2b246e30650c672a3563683cc58d5005da94cb6a3f6566c697ee" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.142120 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:15:32 crc 
kubenswrapper[4687]: I0131 07:15:32.147294 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.158814 4687 scope.go:117] "RemoveContainer" containerID="3e36cd32b9ad9b5ce0d4a234c68edb3099f4a2ddc3a14fa50f2cb4a79f45d5c0" Jan 31 07:15:32 crc kubenswrapper[4687]: E0131 07:15:32.159689 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e36cd32b9ad9b5ce0d4a234c68edb3099f4a2ddc3a14fa50f2cb4a79f45d5c0\": container with ID starting with 3e36cd32b9ad9b5ce0d4a234c68edb3099f4a2ddc3a14fa50f2cb4a79f45d5c0 not found: ID does not exist" containerID="3e36cd32b9ad9b5ce0d4a234c68edb3099f4a2ddc3a14fa50f2cb4a79f45d5c0" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.159731 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e36cd32b9ad9b5ce0d4a234c68edb3099f4a2ddc3a14fa50f2cb4a79f45d5c0"} err="failed to get container status \"3e36cd32b9ad9b5ce0d4a234c68edb3099f4a2ddc3a14fa50f2cb4a79f45d5c0\": rpc error: code = NotFound desc = could not find container \"3e36cd32b9ad9b5ce0d4a234c68edb3099f4a2ddc3a14fa50f2cb4a79f45d5c0\": container with ID starting with 3e36cd32b9ad9b5ce0d4a234c68edb3099f4a2ddc3a14fa50f2cb4a79f45d5c0 not found: ID does not exist" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.159759 4687 scope.go:117] "RemoveContainer" containerID="a07238fab77e2b246e30650c672a3563683cc58d5005da94cb6a3f6566c697ee" Jan 31 07:15:32 crc kubenswrapper[4687]: E0131 07:15:32.160083 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07238fab77e2b246e30650c672a3563683cc58d5005da94cb6a3f6566c697ee\": container with ID starting with a07238fab77e2b246e30650c672a3563683cc58d5005da94cb6a3f6566c697ee not found: ID does not exist" 
containerID="a07238fab77e2b246e30650c672a3563683cc58d5005da94cb6a3f6566c697ee" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.160136 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07238fab77e2b246e30650c672a3563683cc58d5005da94cb6a3f6566c697ee"} err="failed to get container status \"a07238fab77e2b246e30650c672a3563683cc58d5005da94cb6a3f6566c697ee\": rpc error: code = NotFound desc = could not find container \"a07238fab77e2b246e30650c672a3563683cc58d5005da94cb6a3f6566c697ee\": container with ID starting with a07238fab77e2b246e30650c672a3563683cc58d5005da94cb6a3f6566c697ee not found: ID does not exist" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.208285 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:32 crc kubenswrapper[4687]: I0131 07:15:32.208321 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:33 crc kubenswrapper[4687]: I0131 07:15:33.155200 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:33 crc kubenswrapper[4687]: I0131 07:15:33.155539 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:33 crc kubenswrapper[4687]: I0131 07:15:33.208764 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:33 crc kubenswrapper[4687]: I0131 07:15:33.615350 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="787ae24c-3f78-4d06-b797-e50650509346" path="/var/lib/kubelet/pods/787ae24c-3f78-4d06-b797-e50650509346/volumes" Jan 31 07:15:34 crc 
kubenswrapper[4687]: I0131 07:15:34.183710 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:34 crc kubenswrapper[4687]: I0131 07:15:34.728816 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lrz96"] Jan 31 07:15:36 crc kubenswrapper[4687]: I0131 07:15:36.150353 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lrz96" podUID="b05210f4-4b71-4670-a1f5-e66c2cd1056c" containerName="registry-server" containerID="cri-o://9d1baebc8d16e31d266deb497f79ed1896e324601b94fcdd4232f60a8af19cb1" gracePeriod=2 Jan 31 07:15:36 crc kubenswrapper[4687]: I0131 07:15:36.868015 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-thb94" Jan 31 07:15:36 crc kubenswrapper[4687]: I0131 07:15:36.868378 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-thb94" Jan 31 07:15:36 crc kubenswrapper[4687]: I0131 07:15:36.912357 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-thb94" Jan 31 07:15:37 crc kubenswrapper[4687]: I0131 07:15:37.159594 4687 generic.go:334] "Generic (PLEG): container finished" podID="b05210f4-4b71-4670-a1f5-e66c2cd1056c" containerID="9d1baebc8d16e31d266deb497f79ed1896e324601b94fcdd4232f60a8af19cb1" exitCode=0 Jan 31 07:15:37 crc kubenswrapper[4687]: I0131 07:15:37.159731 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz96" event={"ID":"b05210f4-4b71-4670-a1f5-e66c2cd1056c","Type":"ContainerDied","Data":"9d1baebc8d16e31d266deb497f79ed1896e324601b94fcdd4232f60a8af19cb1"} Jan 31 07:15:37 crc kubenswrapper[4687]: I0131 07:15:37.206773 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-thb94" Jan 31 07:15:37 crc kubenswrapper[4687]: I0131 07:15:37.644118 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:37 crc kubenswrapper[4687]: I0131 07:15:37.782537 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05210f4-4b71-4670-a1f5-e66c2cd1056c-catalog-content\") pod \"b05210f4-4b71-4670-a1f5-e66c2cd1056c\" (UID: \"b05210f4-4b71-4670-a1f5-e66c2cd1056c\") " Jan 31 07:15:37 crc kubenswrapper[4687]: I0131 07:15:37.782924 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvsnx\" (UniqueName: \"kubernetes.io/projected/b05210f4-4b71-4670-a1f5-e66c2cd1056c-kube-api-access-zvsnx\") pod \"b05210f4-4b71-4670-a1f5-e66c2cd1056c\" (UID: \"b05210f4-4b71-4670-a1f5-e66c2cd1056c\") " Jan 31 07:15:37 crc kubenswrapper[4687]: I0131 07:15:37.782989 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05210f4-4b71-4670-a1f5-e66c2cd1056c-utilities\") pod \"b05210f4-4b71-4670-a1f5-e66c2cd1056c\" (UID: \"b05210f4-4b71-4670-a1f5-e66c2cd1056c\") " Jan 31 07:15:37 crc kubenswrapper[4687]: I0131 07:15:37.783740 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05210f4-4b71-4670-a1f5-e66c2cd1056c-utilities" (OuterVolumeSpecName: "utilities") pod "b05210f4-4b71-4670-a1f5-e66c2cd1056c" (UID: "b05210f4-4b71-4670-a1f5-e66c2cd1056c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:15:37 crc kubenswrapper[4687]: I0131 07:15:37.797563 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05210f4-4b71-4670-a1f5-e66c2cd1056c-kube-api-access-zvsnx" (OuterVolumeSpecName: "kube-api-access-zvsnx") pod "b05210f4-4b71-4670-a1f5-e66c2cd1056c" (UID: "b05210f4-4b71-4670-a1f5-e66c2cd1056c"). InnerVolumeSpecName "kube-api-access-zvsnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:37 crc kubenswrapper[4687]: I0131 07:15:37.975377 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvsnx\" (UniqueName: \"kubernetes.io/projected/b05210f4-4b71-4670-a1f5-e66c2cd1056c-kube-api-access-zvsnx\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:37 crc kubenswrapper[4687]: I0131 07:15:37.975470 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05210f4-4b71-4670-a1f5-e66c2cd1056c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:38 crc kubenswrapper[4687]: I0131 07:15:38.003220 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05210f4-4b71-4670-a1f5-e66c2cd1056c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b05210f4-4b71-4670-a1f5-e66c2cd1056c" (UID: "b05210f4-4b71-4670-a1f5-e66c2cd1056c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:15:38 crc kubenswrapper[4687]: I0131 07:15:38.076606 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05210f4-4b71-4670-a1f5-e66c2cd1056c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:38 crc kubenswrapper[4687]: I0131 07:15:38.170563 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lrz96" Jan 31 07:15:38 crc kubenswrapper[4687]: I0131 07:15:38.173462 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz96" event={"ID":"b05210f4-4b71-4670-a1f5-e66c2cd1056c","Type":"ContainerDied","Data":"f31070bfabd920b17d093fdcc2a199b7d4d8031c08f0abe8711366e06ce61f29"} Jan 31 07:15:38 crc kubenswrapper[4687]: I0131 07:15:38.173507 4687 scope.go:117] "RemoveContainer" containerID="9d1baebc8d16e31d266deb497f79ed1896e324601b94fcdd4232f60a8af19cb1" Jan 31 07:15:38 crc kubenswrapper[4687]: I0131 07:15:38.200951 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lrz96"] Jan 31 07:15:38 crc kubenswrapper[4687]: I0131 07:15:38.207082 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lrz96"] Jan 31 07:15:38 crc kubenswrapper[4687]: I0131 07:15:38.207825 4687 scope.go:117] "RemoveContainer" containerID="498dc096fcee3009739d17169c6d39d3d97dc62b2155c48280fc723c9846e685" Jan 31 07:15:38 crc kubenswrapper[4687]: I0131 07:15:38.228607 4687 scope.go:117] "RemoveContainer" containerID="84b779fb848b47d3a9ea56058532fd72d21dabeccf47f8703be9c7ad32429297" Jan 31 07:15:38 crc kubenswrapper[4687]: I0131 07:15:38.531156 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-thb94"] Jan 31 07:15:39 crc kubenswrapper[4687]: I0131 07:15:39.178026 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-thb94" podUID="9381a705-2254-4dc0-84c4-5b510d03ae92" containerName="registry-server" containerID="cri-o://9e2002bc4c0cd7e0093f9baba4dc1749bfb7a26bc8de987e9116bd6a49b0d76c" gracePeriod=2 Jan 31 07:15:39 crc kubenswrapper[4687]: I0131 07:15:39.612870 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05210f4-4b71-4670-a1f5-e66c2cd1056c" 
path="/var/lib/kubelet/pods/b05210f4-4b71-4670-a1f5-e66c2cd1056c/volumes" Jan 31 07:15:39 crc kubenswrapper[4687]: I0131 07:15:39.924267 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thb94" Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.001866 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9381a705-2254-4dc0-84c4-5b510d03ae92-utilities\") pod \"9381a705-2254-4dc0-84c4-5b510d03ae92\" (UID: \"9381a705-2254-4dc0-84c4-5b510d03ae92\") " Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.001972 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdlfj\" (UniqueName: \"kubernetes.io/projected/9381a705-2254-4dc0-84c4-5b510d03ae92-kube-api-access-wdlfj\") pod \"9381a705-2254-4dc0-84c4-5b510d03ae92\" (UID: \"9381a705-2254-4dc0-84c4-5b510d03ae92\") " Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.002136 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9381a705-2254-4dc0-84c4-5b510d03ae92-catalog-content\") pod \"9381a705-2254-4dc0-84c4-5b510d03ae92\" (UID: \"9381a705-2254-4dc0-84c4-5b510d03ae92\") " Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.002981 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9381a705-2254-4dc0-84c4-5b510d03ae92-utilities" (OuterVolumeSpecName: "utilities") pod "9381a705-2254-4dc0-84c4-5b510d03ae92" (UID: "9381a705-2254-4dc0-84c4-5b510d03ae92"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.008804 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9381a705-2254-4dc0-84c4-5b510d03ae92-kube-api-access-wdlfj" (OuterVolumeSpecName: "kube-api-access-wdlfj") pod "9381a705-2254-4dc0-84c4-5b510d03ae92" (UID: "9381a705-2254-4dc0-84c4-5b510d03ae92"). InnerVolumeSpecName "kube-api-access-wdlfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.103686 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdlfj\" (UniqueName: \"kubernetes.io/projected/9381a705-2254-4dc0-84c4-5b510d03ae92-kube-api-access-wdlfj\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.103749 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9381a705-2254-4dc0-84c4-5b510d03ae92-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.187791 4687 generic.go:334] "Generic (PLEG): container finished" podID="9381a705-2254-4dc0-84c4-5b510d03ae92" containerID="9e2002bc4c0cd7e0093f9baba4dc1749bfb7a26bc8de987e9116bd6a49b0d76c" exitCode=0 Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.187840 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thb94" event={"ID":"9381a705-2254-4dc0-84c4-5b510d03ae92","Type":"ContainerDied","Data":"9e2002bc4c0cd7e0093f9baba4dc1749bfb7a26bc8de987e9116bd6a49b0d76c"} Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.187875 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thb94" event={"ID":"9381a705-2254-4dc0-84c4-5b510d03ae92","Type":"ContainerDied","Data":"841402456dcae40823f7ec62bad632750ddea3b8ac32dfac2d63629c913889bc"} Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 
07:15:40.187878 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thb94" Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.187912 4687 scope.go:117] "RemoveContainer" containerID="9e2002bc4c0cd7e0093f9baba4dc1749bfb7a26bc8de987e9116bd6a49b0d76c" Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.217594 4687 scope.go:117] "RemoveContainer" containerID="c2388284e9d13ed0b8fa5d0fdca6c9f69dbbfe2a4dd7b2aab33b7ae07c3bf68d" Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.240964 4687 scope.go:117] "RemoveContainer" containerID="5e6fbdfb5cb5bb43a081fb946d61daa24bae127297b798d3bc65332db176acdd" Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.264670 4687 scope.go:117] "RemoveContainer" containerID="9e2002bc4c0cd7e0093f9baba4dc1749bfb7a26bc8de987e9116bd6a49b0d76c" Jan 31 07:15:40 crc kubenswrapper[4687]: E0131 07:15:40.265447 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e2002bc4c0cd7e0093f9baba4dc1749bfb7a26bc8de987e9116bd6a49b0d76c\": container with ID starting with 9e2002bc4c0cd7e0093f9baba4dc1749bfb7a26bc8de987e9116bd6a49b0d76c not found: ID does not exist" containerID="9e2002bc4c0cd7e0093f9baba4dc1749bfb7a26bc8de987e9116bd6a49b0d76c" Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.265485 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e2002bc4c0cd7e0093f9baba4dc1749bfb7a26bc8de987e9116bd6a49b0d76c"} err="failed to get container status \"9e2002bc4c0cd7e0093f9baba4dc1749bfb7a26bc8de987e9116bd6a49b0d76c\": rpc error: code = NotFound desc = could not find container \"9e2002bc4c0cd7e0093f9baba4dc1749bfb7a26bc8de987e9116bd6a49b0d76c\": container with ID starting with 9e2002bc4c0cd7e0093f9baba4dc1749bfb7a26bc8de987e9116bd6a49b0d76c not found: ID does not exist" Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.265506 4687 
scope.go:117] "RemoveContainer" containerID="c2388284e9d13ed0b8fa5d0fdca6c9f69dbbfe2a4dd7b2aab33b7ae07c3bf68d" Jan 31 07:15:40 crc kubenswrapper[4687]: E0131 07:15:40.265909 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2388284e9d13ed0b8fa5d0fdca6c9f69dbbfe2a4dd7b2aab33b7ae07c3bf68d\": container with ID starting with c2388284e9d13ed0b8fa5d0fdca6c9f69dbbfe2a4dd7b2aab33b7ae07c3bf68d not found: ID does not exist" containerID="c2388284e9d13ed0b8fa5d0fdca6c9f69dbbfe2a4dd7b2aab33b7ae07c3bf68d" Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.265945 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2388284e9d13ed0b8fa5d0fdca6c9f69dbbfe2a4dd7b2aab33b7ae07c3bf68d"} err="failed to get container status \"c2388284e9d13ed0b8fa5d0fdca6c9f69dbbfe2a4dd7b2aab33b7ae07c3bf68d\": rpc error: code = NotFound desc = could not find container \"c2388284e9d13ed0b8fa5d0fdca6c9f69dbbfe2a4dd7b2aab33b7ae07c3bf68d\": container with ID starting with c2388284e9d13ed0b8fa5d0fdca6c9f69dbbfe2a4dd7b2aab33b7ae07c3bf68d not found: ID does not exist" Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.265985 4687 scope.go:117] "RemoveContainer" containerID="5e6fbdfb5cb5bb43a081fb946d61daa24bae127297b798d3bc65332db176acdd" Jan 31 07:15:40 crc kubenswrapper[4687]: E0131 07:15:40.266282 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e6fbdfb5cb5bb43a081fb946d61daa24bae127297b798d3bc65332db176acdd\": container with ID starting with 5e6fbdfb5cb5bb43a081fb946d61daa24bae127297b798d3bc65332db176acdd not found: ID does not exist" containerID="5e6fbdfb5cb5bb43a081fb946d61daa24bae127297b798d3bc65332db176acdd" Jan 31 07:15:40 crc kubenswrapper[4687]: I0131 07:15:40.266325 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5e6fbdfb5cb5bb43a081fb946d61daa24bae127297b798d3bc65332db176acdd"} err="failed to get container status \"5e6fbdfb5cb5bb43a081fb946d61daa24bae127297b798d3bc65332db176acdd\": rpc error: code = NotFound desc = could not find container \"5e6fbdfb5cb5bb43a081fb946d61daa24bae127297b798d3bc65332db176acdd\": container with ID starting with 5e6fbdfb5cb5bb43a081fb946d61daa24bae127297b798d3bc65332db176acdd not found: ID does not exist" Jan 31 07:15:42 crc kubenswrapper[4687]: I0131 07:15:42.031629 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9381a705-2254-4dc0-84c4-5b510d03ae92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9381a705-2254-4dc0-84c4-5b510d03ae92" (UID: "9381a705-2254-4dc0-84c4-5b510d03ae92"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:15:42 crc kubenswrapper[4687]: I0131 07:15:42.034168 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9381a705-2254-4dc0-84c4-5b510d03ae92-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:42 crc kubenswrapper[4687]: I0131 07:15:42.322169 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-thb94"] Jan 31 07:15:42 crc kubenswrapper[4687]: I0131 07:15:42.330386 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-thb94"] Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.613191 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9381a705-2254-4dc0-84c4-5b510d03ae92" path="/var/lib/kubelet/pods/9381a705-2254-4dc0-84c4-5b510d03ae92/volumes" Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.762451 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-cdxqh"] Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 
07:15:43.769503 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-cdxqh"] Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.775842 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.776396 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="account-server" containerID="cri-o://57255eff28aadc0f504b048b696e5785a65bddda1c04167b42793b0ae630f5f8" gracePeriod=30 Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.776465 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="account-auditor" containerID="cri-o://30c8a9046e479dd3d4719b5b38bd785ecc1a69005467729281cf8324e096a6d8" gracePeriod=30 Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.776493 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="container-updater" containerID="cri-o://29971351b38387c34c20fe50e6de67979f4bc9723a1be93feef1492db50a6d31" gracePeriod=30 Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.776478 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="account-reaper" containerID="cri-o://dc059a4299aaa5e0039676b11749b1ff11d523783abb720b1db4fca1b57d8a02" gracePeriod=30 Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.776575 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="container-auditor" 
containerID="cri-o://3ab4ab844783fa31daf1c1eed13d6cad654b268a5cebed800beb83b2b4076a10" gracePeriod=30 Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.776587 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="account-replicator" containerID="cri-o://502b54aa63f153278d1af53d6e2ef57ee86668bc1ca4b9331e43f7e1d8fcdd51" gracePeriod=30 Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.776465 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="container-server" containerID="cri-o://250db73b99466a6d136c29b5ddb443fea1455c9b3f051000bc5c30d2a3dcac0d" gracePeriod=30 Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.776638 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="container-replicator" containerID="cri-o://1de988ae783d7ef322b32e03cec233e8d6a73b90c66b17400298df3da2c6bba3" gracePeriod=30 Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.776662 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-expirer" containerID="cri-o://087709f07a16a8956cad97cec775636bfa983adaa6627cebd8289db5e77fc582" gracePeriod=30 Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.776686 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="swift-recon-cron" containerID="cri-o://87120a710046f2e75116a16c4179bf49847f21569c6c405cde1ad7b2f9011407" gracePeriod=30 Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.776715 4687 
kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="rsync" containerID="cri-o://3769f301e625ab3cce3a06cc29e9d5f5bb2ae84bd6b08ca2cb7bb3f7aabb6511" gracePeriod=30 Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.776742 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-auditor" containerID="cri-o://067116e8aa6dadfeb22d2c041ee5c818ebc935d4f59ceeefd77867071352b8cb" gracePeriod=30 Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.776766 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-updater" containerID="cri-o://462d03384382a6f3fb4523829751723bfeacf1bcf107bf6627d59de69d3cc69c" gracePeriod=30 Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.776791 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-replicator" containerID="cri-o://829eb8a3a323c6c98f85abad5a6e6c8ae17563e61b17350c95f76c0df7a70f82" gracePeriod=30 Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.776901 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-server" containerID="cri-o://07418b09ea9b43e2f4b1393bd07f96ae9987062bed63bf2dcc8bd66e1db90bc0" gracePeriod=30 Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.789509 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-proxy-6d699db77c-f72hz"] Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.789790 4687 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" podUID="3b574508-eb1c-4b61-bc77-3878a38f36f3" containerName="proxy-httpd" containerID="cri-o://760050e5bc449d6233b42570fba8a91e31fe01bb287307a4c536a3eebc531b0f" gracePeriod=30 Jan 31 07:15:43 crc kubenswrapper[4687]: I0131 07:15:43.789853 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" podUID="3b574508-eb1c-4b61-bc77-3878a38f36f3" containerName="proxy-server" containerID="cri-o://55865aee0d7ad1e7ca4fefbcebbfc24f0bf9203cbd8dfeb846d0394ee774abd6" gracePeriod=30 Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.245722 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerID="087709f07a16a8956cad97cec775636bfa983adaa6627cebd8289db5e77fc582" exitCode=0 Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246033 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerID="462d03384382a6f3fb4523829751723bfeacf1bcf107bf6627d59de69d3cc69c" exitCode=0 Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246041 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerID="067116e8aa6dadfeb22d2c041ee5c818ebc935d4f59ceeefd77867071352b8cb" exitCode=0 Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246048 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerID="829eb8a3a323c6c98f85abad5a6e6c8ae17563e61b17350c95f76c0df7a70f82" exitCode=0 Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246055 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerID="29971351b38387c34c20fe50e6de67979f4bc9723a1be93feef1492db50a6d31" exitCode=0 Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246061 4687 generic.go:334] 
"Generic (PLEG): container finished" podID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerID="3ab4ab844783fa31daf1c1eed13d6cad654b268a5cebed800beb83b2b4076a10" exitCode=0 Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246069 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerID="1de988ae783d7ef322b32e03cec233e8d6a73b90c66b17400298df3da2c6bba3" exitCode=0 Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246075 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerID="dc059a4299aaa5e0039676b11749b1ff11d523783abb720b1db4fca1b57d8a02" exitCode=0 Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246082 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerID="30c8a9046e479dd3d4719b5b38bd785ecc1a69005467729281cf8324e096a6d8" exitCode=0 Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246089 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerID="502b54aa63f153278d1af53d6e2ef57ee86668bc1ca4b9331e43f7e1d8fcdd51" exitCode=0 Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246171 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerDied","Data":"087709f07a16a8956cad97cec775636bfa983adaa6627cebd8289db5e77fc582"} Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246206 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerDied","Data":"462d03384382a6f3fb4523829751723bfeacf1bcf107bf6627d59de69d3cc69c"} Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246221 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" 
event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerDied","Data":"067116e8aa6dadfeb22d2c041ee5c818ebc935d4f59ceeefd77867071352b8cb"} Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246232 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerDied","Data":"829eb8a3a323c6c98f85abad5a6e6c8ae17563e61b17350c95f76c0df7a70f82"} Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246251 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerDied","Data":"29971351b38387c34c20fe50e6de67979f4bc9723a1be93feef1492db50a6d31"} Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246261 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerDied","Data":"3ab4ab844783fa31daf1c1eed13d6cad654b268a5cebed800beb83b2b4076a10"} Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246273 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerDied","Data":"1de988ae783d7ef322b32e03cec233e8d6a73b90c66b17400298df3da2c6bba3"} Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246284 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerDied","Data":"dc059a4299aaa5e0039676b11749b1ff11d523783abb720b1db4fca1b57d8a02"} Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246294 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerDied","Data":"30c8a9046e479dd3d4719b5b38bd785ecc1a69005467729281cf8324e096a6d8"} 
Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.246304 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerDied","Data":"502b54aa63f153278d1af53d6e2ef57ee86668bc1ca4b9331e43f7e1d8fcdd51"} Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.248064 4687 generic.go:334] "Generic (PLEG): container finished" podID="3b574508-eb1c-4b61-bc77-3878a38f36f3" containerID="760050e5bc449d6233b42570fba8a91e31fe01bb287307a4c536a3eebc531b0f" exitCode=0 Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.248115 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" event={"ID":"3b574508-eb1c-4b61-bc77-3878a38f36f3","Type":"ContainerDied","Data":"760050e5bc449d6233b42570fba8a91e31fe01bb287307a4c536a3eebc531b0f"} Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.785785 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.979567 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b574508-eb1c-4b61-bc77-3878a38f36f3-config-data\") pod \"3b574508-eb1c-4b61-bc77-3878a38f36f3\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.979698 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b574508-eb1c-4b61-bc77-3878a38f36f3-log-httpd\") pod \"3b574508-eb1c-4b61-bc77-3878a38f36f3\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.979723 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift\") pod \"3b574508-eb1c-4b61-bc77-3878a38f36f3\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.979765 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ltmj\" (UniqueName: \"kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-kube-api-access-7ltmj\") pod \"3b574508-eb1c-4b61-bc77-3878a38f36f3\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.980539 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b574508-eb1c-4b61-bc77-3878a38f36f3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3b574508-eb1c-4b61-bc77-3878a38f36f3" (UID: "3b574508-eb1c-4b61-bc77-3878a38f36f3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.981145 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b574508-eb1c-4b61-bc77-3878a38f36f3-run-httpd\") pod \"3b574508-eb1c-4b61-bc77-3878a38f36f3\" (UID: \"3b574508-eb1c-4b61-bc77-3878a38f36f3\") " Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.981611 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b574508-eb1c-4b61-bc77-3878a38f36f3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.985320 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-kube-api-access-7ltmj" (OuterVolumeSpecName: "kube-api-access-7ltmj") pod "3b574508-eb1c-4b61-bc77-3878a38f36f3" (UID: "3b574508-eb1c-4b61-bc77-3878a38f36f3"). InnerVolumeSpecName "kube-api-access-7ltmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.987615 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b574508-eb1c-4b61-bc77-3878a38f36f3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3b574508-eb1c-4b61-bc77-3878a38f36f3" (UID: "3b574508-eb1c-4b61-bc77-3878a38f36f3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:15:44 crc kubenswrapper[4687]: I0131 07:15:44.988797 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3b574508-eb1c-4b61-bc77-3878a38f36f3" (UID: "3b574508-eb1c-4b61-bc77-3878a38f36f3"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.020065 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b574508-eb1c-4b61-bc77-3878a38f36f3-config-data" (OuterVolumeSpecName: "config-data") pod "3b574508-eb1c-4b61-bc77-3878a38f36f3" (UID: "3b574508-eb1c-4b61-bc77-3878a38f36f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.082570 4687 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.082624 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ltmj\" (UniqueName: \"kubernetes.io/projected/3b574508-eb1c-4b61-bc77-3878a38f36f3-kube-api-access-7ltmj\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.082640 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b574508-eb1c-4b61-bc77-3878a38f36f3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.082653 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b574508-eb1c-4b61-bc77-3878a38f36f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.270725 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerID="3769f301e625ab3cce3a06cc29e9d5f5bb2ae84bd6b08ca2cb7bb3f7aabb6511" exitCode=0 Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.270762 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" 
containerID="07418b09ea9b43e2f4b1393bd07f96ae9987062bed63bf2dcc8bd66e1db90bc0" exitCode=0 Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.270771 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerID="250db73b99466a6d136c29b5ddb443fea1455c9b3f051000bc5c30d2a3dcac0d" exitCode=0 Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.270778 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerID="57255eff28aadc0f504b048b696e5785a65bddda1c04167b42793b0ae630f5f8" exitCode=0 Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.270849 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerDied","Data":"3769f301e625ab3cce3a06cc29e9d5f5bb2ae84bd6b08ca2cb7bb3f7aabb6511"} Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.270875 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerDied","Data":"07418b09ea9b43e2f4b1393bd07f96ae9987062bed63bf2dcc8bd66e1db90bc0"} Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.270884 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerDied","Data":"250db73b99466a6d136c29b5ddb443fea1455c9b3f051000bc5c30d2a3dcac0d"} Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.270893 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerDied","Data":"57255eff28aadc0f504b048b696e5785a65bddda1c04167b42793b0ae630f5f8"} Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.273001 4687 generic.go:334] "Generic (PLEG): container finished" podID="3b574508-eb1c-4b61-bc77-3878a38f36f3" 
containerID="55865aee0d7ad1e7ca4fefbcebbfc24f0bf9203cbd8dfeb846d0394ee774abd6" exitCode=0 Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.273026 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" event={"ID":"3b574508-eb1c-4b61-bc77-3878a38f36f3","Type":"ContainerDied","Data":"55865aee0d7ad1e7ca4fefbcebbfc24f0bf9203cbd8dfeb846d0394ee774abd6"} Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.273043 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" event={"ID":"3b574508-eb1c-4b61-bc77-3878a38f36f3","Type":"ContainerDied","Data":"7e13435e423dd8ab2fb232fc66d1b74519ffa22cdb10a3857de92b9910fd1794"} Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.273060 4687 scope.go:117] "RemoveContainer" containerID="55865aee0d7ad1e7ca4fefbcebbfc24f0bf9203cbd8dfeb846d0394ee774abd6" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.273183 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-proxy-6d699db77c-f72hz" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.294044 4687 scope.go:117] "RemoveContainer" containerID="760050e5bc449d6233b42570fba8a91e31fe01bb287307a4c536a3eebc531b0f" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.305388 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-proxy-6d699db77c-f72hz"] Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.310648 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/swift-proxy-6d699db77c-f72hz"] Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.314631 4687 scope.go:117] "RemoveContainer" containerID="55865aee0d7ad1e7ca4fefbcebbfc24f0bf9203cbd8dfeb846d0394ee774abd6" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.315142 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55865aee0d7ad1e7ca4fefbcebbfc24f0bf9203cbd8dfeb846d0394ee774abd6\": container with ID starting with 55865aee0d7ad1e7ca4fefbcebbfc24f0bf9203cbd8dfeb846d0394ee774abd6 not found: ID does not exist" containerID="55865aee0d7ad1e7ca4fefbcebbfc24f0bf9203cbd8dfeb846d0394ee774abd6" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.315179 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55865aee0d7ad1e7ca4fefbcebbfc24f0bf9203cbd8dfeb846d0394ee774abd6"} err="failed to get container status \"55865aee0d7ad1e7ca4fefbcebbfc24f0bf9203cbd8dfeb846d0394ee774abd6\": rpc error: code = NotFound desc = could not find container \"55865aee0d7ad1e7ca4fefbcebbfc24f0bf9203cbd8dfeb846d0394ee774abd6\": container with ID starting with 55865aee0d7ad1e7ca4fefbcebbfc24f0bf9203cbd8dfeb846d0394ee774abd6 not found: ID does not exist" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.315199 4687 scope.go:117] "RemoveContainer" 
containerID="760050e5bc449d6233b42570fba8a91e31fe01bb287307a4c536a3eebc531b0f" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.315454 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"760050e5bc449d6233b42570fba8a91e31fe01bb287307a4c536a3eebc531b0f\": container with ID starting with 760050e5bc449d6233b42570fba8a91e31fe01bb287307a4c536a3eebc531b0f not found: ID does not exist" containerID="760050e5bc449d6233b42570fba8a91e31fe01bb287307a4c536a3eebc531b0f" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.315472 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"760050e5bc449d6233b42570fba8a91e31fe01bb287307a4c536a3eebc531b0f"} err="failed to get container status \"760050e5bc449d6233b42570fba8a91e31fe01bb287307a4c536a3eebc531b0f\": rpc error: code = NotFound desc = could not find container \"760050e5bc449d6233b42570fba8a91e31fe01bb287307a4c536a3eebc531b0f\": container with ID starting with 760050e5bc449d6233b42570fba8a91e31fe01bb287307a4c536a3eebc531b0f not found: ID does not exist" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.405383 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-pg8vx"] Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.413707 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-ttw96"] Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.422418 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-pg8vx"] Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.427617 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-ttw96"] Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.434917 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-7f864d6549-bfflx"] Jan 31 07:15:45 crc 
kubenswrapper[4687]: I0131 07:15:45.435151 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" podUID="be44d699-42c9-4e7f-a533-8b39328ceedd" containerName="keystone-api" containerID="cri-o://29b31b5466c245cfbc7695198a62a9fbcab112ad827e3a840d0e4cc528776e6a" gracePeriod=30 Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.456770 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone1184-account-delete-9tvfj"] Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457061 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9381a705-2254-4dc0-84c4-5b510d03ae92" containerName="registry-server" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457084 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9381a705-2254-4dc0-84c4-5b510d03ae92" containerName="registry-server" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457100 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b574508-eb1c-4b61-bc77-3878a38f36f3" containerName="proxy-server" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457105 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b574508-eb1c-4b61-bc77-3878a38f36f3" containerName="proxy-server" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457113 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05210f4-4b71-4670-a1f5-e66c2cd1056c" containerName="registry-server" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457120 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05210f4-4b71-4670-a1f5-e66c2cd1056c" containerName="registry-server" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457131 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f97349-9bfe-4c6e-bddb-a40db8f381b0" containerName="glance-log" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457140 4687 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a9f97349-9bfe-4c6e-bddb-a40db8f381b0" containerName="glance-log" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457154 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05210f4-4b71-4670-a1f5-e66c2cd1056c" containerName="extract-content" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457162 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05210f4-4b71-4670-a1f5-e66c2cd1056c" containerName="extract-content" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457175 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354c2b73-6b6c-4b19-b1e3-1bb8e221150a" containerName="glance-log" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457181 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="354c2b73-6b6c-4b19-b1e3-1bb8e221150a" containerName="glance-log" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457197 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9381a705-2254-4dc0-84c4-5b510d03ae92" containerName="extract-utilities" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457209 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9381a705-2254-4dc0-84c4-5b510d03ae92" containerName="extract-utilities" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457225 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b574508-eb1c-4b61-bc77-3878a38f36f3" containerName="proxy-httpd" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457235 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b574508-eb1c-4b61-bc77-3878a38f36f3" containerName="proxy-httpd" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457248 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9381a705-2254-4dc0-84c4-5b510d03ae92" containerName="extract-content" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457255 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9381a705-2254-4dc0-84c4-5b510d03ae92" containerName="extract-content" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457269 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787ae24c-3f78-4d06-b797-e50650509346" containerName="glance-log" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457276 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="787ae24c-3f78-4d06-b797-e50650509346" containerName="glance-log" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457287 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05210f4-4b71-4670-a1f5-e66c2cd1056c" containerName="extract-utilities" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457294 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05210f4-4b71-4670-a1f5-e66c2cd1056c" containerName="extract-utilities" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457306 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca37316-630c-44e6-ab4c-5beb44c545de" containerName="mariadb-account-delete" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457312 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca37316-630c-44e6-ab4c-5beb44c545de" containerName="mariadb-account-delete" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457320 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354c2b73-6b6c-4b19-b1e3-1bb8e221150a" containerName="glance-httpd" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457326 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="354c2b73-6b6c-4b19-b1e3-1bb8e221150a" containerName="glance-httpd" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457336 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fb68cd-53bd-4337-b199-605b7c23c33b" containerName="glance-log" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457341 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="27fb68cd-53bd-4337-b199-605b7c23c33b" containerName="glance-log" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457349 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f97349-9bfe-4c6e-bddb-a40db8f381b0" containerName="glance-httpd" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457355 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f97349-9bfe-4c6e-bddb-a40db8f381b0" containerName="glance-httpd" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457365 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787ae24c-3f78-4d06-b797-e50650509346" containerName="glance-httpd" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457370 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="787ae24c-3f78-4d06-b797-e50650509346" containerName="glance-httpd" Jan 31 07:15:45 crc kubenswrapper[4687]: E0131 07:15:45.457383 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fb68cd-53bd-4337-b199-605b7c23c33b" containerName="glance-httpd" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457390 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fb68cd-53bd-4337-b199-605b7c23c33b" containerName="glance-httpd" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457517 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="787ae24c-3f78-4d06-b797-e50650509346" containerName="glance-httpd" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457528 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fb68cd-53bd-4337-b199-605b7c23c33b" containerName="glance-log" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457538 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f97349-9bfe-4c6e-bddb-a40db8f381b0" containerName="glance-log" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457546 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3b574508-eb1c-4b61-bc77-3878a38f36f3" containerName="proxy-httpd" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457556 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="354c2b73-6b6c-4b19-b1e3-1bb8e221150a" containerName="glance-log" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457565 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f97349-9bfe-4c6e-bddb-a40db8f381b0" containerName="glance-httpd" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457573 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca37316-630c-44e6-ab4c-5beb44c545de" containerName="mariadb-account-delete" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457583 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9381a705-2254-4dc0-84c4-5b510d03ae92" containerName="registry-server" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457589 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="787ae24c-3f78-4d06-b797-e50650509346" containerName="glance-log" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457597 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="354c2b73-6b6c-4b19-b1e3-1bb8e221150a" containerName="glance-httpd" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457605 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fb68cd-53bd-4337-b199-605b7c23c33b" containerName="glance-httpd" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457613 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b574508-eb1c-4b61-bc77-3878a38f36f3" containerName="proxy-server" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.457621 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b05210f4-4b71-4670-a1f5-e66c2cd1056c" containerName="registry-server" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.458089 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.467070 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone1184-account-delete-9tvfj"] Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.485973 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f24838-c89e-4787-bd07-80871dd0bece-operator-scripts\") pod \"keystone1184-account-delete-9tvfj\" (UID: \"40f24838-c89e-4787-bd07-80871dd0bece\") " pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.486040 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2kw6\" (UniqueName: \"kubernetes.io/projected/40f24838-c89e-4787-bd07-80871dd0bece-kube-api-access-c2kw6\") pod \"keystone1184-account-delete-9tvfj\" (UID: \"40f24838-c89e-4787-bd07-80871dd0bece\") " pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.586645 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2kw6\" (UniqueName: \"kubernetes.io/projected/40f24838-c89e-4787-bd07-80871dd0bece-kube-api-access-c2kw6\") pod \"keystone1184-account-delete-9tvfj\" (UID: \"40f24838-c89e-4787-bd07-80871dd0bece\") " pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.586762 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f24838-c89e-4787-bd07-80871dd0bece-operator-scripts\") pod \"keystone1184-account-delete-9tvfj\" (UID: \"40f24838-c89e-4787-bd07-80871dd0bece\") " pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" Jan 31 07:15:45 crc 
kubenswrapper[4687]: I0131 07:15:45.587697 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f24838-c89e-4787-bd07-80871dd0bece-operator-scripts\") pod \"keystone1184-account-delete-9tvfj\" (UID: \"40f24838-c89e-4787-bd07-80871dd0bece\") " pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.610998 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="264870fa-efbf-41ea-9a90-6e154d696b02" path="/var/lib/kubelet/pods/264870fa-efbf-41ea-9a90-6e154d696b02/volumes" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.611715 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b574508-eb1c-4b61-bc77-3878a38f36f3" path="/var/lib/kubelet/pods/3b574508-eb1c-4b61-bc77-3878a38f36f3/volumes" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.612572 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68acc278-6e5f-44d7-88ec-25ed80bda714" path="/var/lib/kubelet/pods/68acc278-6e5f-44d7-88ec-25ed80bda714/volumes" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.613727 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="766b071f-fb29-43d1-be22-a261a8cb787c" path="/var/lib/kubelet/pods/766b071f-fb29-43d1-be22-a261a8cb787c/volumes" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.616842 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2kw6\" (UniqueName: \"kubernetes.io/projected/40f24838-c89e-4787-bd07-80871dd0bece-kube-api-access-c2kw6\") pod \"keystone1184-account-delete-9tvfj\" (UID: \"40f24838-c89e-4787-bd07-80871dd0bece\") " pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" Jan 31 07:15:45 crc kubenswrapper[4687]: I0131 07:15:45.774726 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" Jan 31 07:15:46 crc kubenswrapper[4687]: I0131 07:15:46.231315 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/root-account-create-update-lvsnd"] Jan 31 07:15:46 crc kubenswrapper[4687]: I0131 07:15:46.232789 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/root-account-create-update-lvsnd" Jan 31 07:15:46 crc kubenswrapper[4687]: I0131 07:15:46.236865 4687 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 31 07:15:46 crc kubenswrapper[4687]: I0131 07:15:46.364139 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/root-account-create-update-lvsnd"] Jan 31 07:15:46 crc kubenswrapper[4687]: I0131 07:15:46.384116 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Jan 31 07:15:46 crc kubenswrapper[4687]: I0131 07:15:46.392350 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Jan 31 07:15:46 crc kubenswrapper[4687]: I0131 07:15:46.409087 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Jan 31 07:15:46 crc kubenswrapper[4687]: I0131 07:15:46.432266 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone1184-account-delete-9tvfj"] Jan 31 07:15:46 crc kubenswrapper[4687]: I0131 07:15:46.434080 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkzt9\" (UniqueName: \"kubernetes.io/projected/e4e7f2df-a655-413a-8b76-063ef8a2e338-kube-api-access-bkzt9\") pod \"root-account-create-update-lvsnd\" (UID: \"e4e7f2df-a655-413a-8b76-063ef8a2e338\") " pod="glance-kuttl-tests/root-account-create-update-lvsnd" Jan 31 07:15:46 crc kubenswrapper[4687]: I0131 07:15:46.434148 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4e7f2df-a655-413a-8b76-063ef8a2e338-operator-scripts\") pod \"root-account-create-update-lvsnd\" (UID: \"e4e7f2df-a655-413a-8b76-063ef8a2e338\") " pod="glance-kuttl-tests/root-account-create-update-lvsnd" Jan 31 07:15:46 crc kubenswrapper[4687]: I0131 07:15:46.447811 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/root-account-create-update-lvsnd"] Jan 31 07:15:46 crc kubenswrapper[4687]: E0131 07:15:46.448393 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-bkzt9 operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="glance-kuttl-tests/root-account-create-update-lvsnd" podUID="e4e7f2df-a655-413a-8b76-063ef8a2e338" Jan 31 07:15:46 crc kubenswrapper[4687]: I0131 07:15:46.535098 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkzt9\" (UniqueName: \"kubernetes.io/projected/e4e7f2df-a655-413a-8b76-063ef8a2e338-kube-api-access-bkzt9\") pod \"root-account-create-update-lvsnd\" (UID: \"e4e7f2df-a655-413a-8b76-063ef8a2e338\") " pod="glance-kuttl-tests/root-account-create-update-lvsnd" Jan 31 07:15:46 crc kubenswrapper[4687]: I0131 07:15:46.535177 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4e7f2df-a655-413a-8b76-063ef8a2e338-operator-scripts\") pod \"root-account-create-update-lvsnd\" (UID: \"e4e7f2df-a655-413a-8b76-063ef8a2e338\") " pod="glance-kuttl-tests/root-account-create-update-lvsnd" Jan 31 07:15:46 crc kubenswrapper[4687]: E0131 07:15:46.535306 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 31 07:15:46 crc kubenswrapper[4687]: E0131 07:15:46.535366 4687 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4e7f2df-a655-413a-8b76-063ef8a2e338-operator-scripts podName:e4e7f2df-a655-413a-8b76-063ef8a2e338 nodeName:}" failed. No retries permitted until 2026-01-31 07:15:47.035345962 +0000 UTC m=+1973.312605537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e4e7f2df-a655-413a-8b76-063ef8a2e338-operator-scripts") pod "root-account-create-update-lvsnd" (UID: "e4e7f2df-a655-413a-8b76-063ef8a2e338") : configmap "openstack-scripts" not found Jan 31 07:15:46 crc kubenswrapper[4687]: E0131 07:15:46.541574 4687 projected.go:194] Error preparing data for projected volume kube-api-access-bkzt9 for pod glance-kuttl-tests/root-account-create-update-lvsnd: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 31 07:15:46 crc kubenswrapper[4687]: E0131 07:15:46.541653 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4e7f2df-a655-413a-8b76-063ef8a2e338-kube-api-access-bkzt9 podName:e4e7f2df-a655-413a-8b76-063ef8a2e338 nodeName:}" failed. No retries permitted until 2026-01-31 07:15:47.041632414 +0000 UTC m=+1973.318891989 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bkzt9" (UniqueName: "kubernetes.io/projected/e4e7f2df-a655-413a-8b76-063ef8a2e338-kube-api-access-bkzt9") pod "root-account-create-update-lvsnd" (UID: "e4e7f2df-a655-413a-8b76-063ef8a2e338") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 31 07:15:46 crc kubenswrapper[4687]: I0131 07:15:46.580957 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/openstack-galera-2" podUID="fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" containerName="galera" containerID="cri-o://c043d3184ab54a35d1e0f449d503797f83fe59efcc6761224fdebfe2d46a168b" gracePeriod=30 Jan 31 07:15:46 crc kubenswrapper[4687]: I0131 07:15:46.879182 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/memcached-0"] Jan 31 07:15:46 crc kubenswrapper[4687]: I0131 07:15:46.879445 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/memcached-0" podUID="7186f0a0-8f6a-465e-b18d-be6b3b28d1c8" containerName="memcached" containerID="cri-o://e4647911bc50b591925814fdb85949e6e40a2cfc1eecb5a61c8bb5f387d31ebb" gracePeriod=30 Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.041956 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkzt9\" (UniqueName: \"kubernetes.io/projected/e4e7f2df-a655-413a-8b76-063ef8a2e338-kube-api-access-bkzt9\") pod \"root-account-create-update-lvsnd\" (UID: \"e4e7f2df-a655-413a-8b76-063ef8a2e338\") " pod="glance-kuttl-tests/root-account-create-update-lvsnd" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.042335 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4e7f2df-a655-413a-8b76-063ef8a2e338-operator-scripts\") pod \"root-account-create-update-lvsnd\" (UID: \"e4e7f2df-a655-413a-8b76-063ef8a2e338\") " 
pod="glance-kuttl-tests/root-account-create-update-lvsnd" Jan 31 07:15:47 crc kubenswrapper[4687]: E0131 07:15:47.042480 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 31 07:15:47 crc kubenswrapper[4687]: E0131 07:15:47.042531 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4e7f2df-a655-413a-8b76-063ef8a2e338-operator-scripts podName:e4e7f2df-a655-413a-8b76-063ef8a2e338 nodeName:}" failed. No retries permitted until 2026-01-31 07:15:48.042517667 +0000 UTC m=+1974.319777242 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e4e7f2df-a655-413a-8b76-063ef8a2e338-operator-scripts") pod "root-account-create-update-lvsnd" (UID: "e4e7f2df-a655-413a-8b76-063ef8a2e338") : configmap "openstack-scripts" not found Jan 31 07:15:47 crc kubenswrapper[4687]: E0131 07:15:47.045797 4687 projected.go:194] Error preparing data for projected volume kube-api-access-bkzt9 for pod glance-kuttl-tests/root-account-create-update-lvsnd: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 31 07:15:47 crc kubenswrapper[4687]: E0131 07:15:47.045877 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4e7f2df-a655-413a-8b76-063ef8a2e338-kube-api-access-bkzt9 podName:e4e7f2df-a655-413a-8b76-063ef8a2e338 nodeName:}" failed. No retries permitted until 2026-01-31 07:15:48.045855659 +0000 UTC m=+1974.323115234 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bkzt9" (UniqueName: "kubernetes.io/projected/e4e7f2df-a655-413a-8b76-063ef8a2e338-kube-api-access-bkzt9") pod "root-account-create-update-lvsnd" (UID: "e4e7f2df-a655-413a-8b76-063ef8a2e338") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.339364 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.383182 4687 generic.go:334] "Generic (PLEG): container finished" podID="fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" containerID="c043d3184ab54a35d1e0f449d503797f83fe59efcc6761224fdebfe2d46a168b" exitCode=0 Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.383260 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6","Type":"ContainerDied","Data":"c043d3184ab54a35d1e0f449d503797f83fe59efcc6761224fdebfe2d46a168b"} Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.385298 4687 generic.go:334] "Generic (PLEG): container finished" podID="40f24838-c89e-4787-bd07-80871dd0bece" containerID="f14efaaa928709d878ca8e0fb3a5b9bdc560fb78756e0f77d0e370db19910c99" exitCode=1 Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.385398 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/root-account-create-update-lvsnd" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.385535 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" event={"ID":"40f24838-c89e-4787-bd07-80871dd0bece","Type":"ContainerDied","Data":"f14efaaa928709d878ca8e0fb3a5b9bdc560fb78756e0f77d0e370db19910c99"} Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.385839 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" event={"ID":"40f24838-c89e-4787-bd07-80871dd0bece","Type":"ContainerStarted","Data":"98122d17e311c92ca0af403368b2059f21b7156ba1aab6c6f56f2d3c8cbc77b4"} Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.386155 4687 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" secret="" err="secret \"galera-openstack-dockercfg-wtz8g\" not found" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.386205 4687 scope.go:117] "RemoveContainer" containerID="f14efaaa928709d878ca8e0fb3a5b9bdc560fb78756e0f77d0e370db19910c99" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.397756 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/root-account-create-update-lvsnd" Jan 31 07:15:47 crc kubenswrapper[4687]: E0131 07:15:47.442463 4687 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.23:33220->38.102.83.23:45455: write tcp 38.102.83.23:33220->38.102.83.23:45455: write: broken pipe Jan 31 07:15:47 crc kubenswrapper[4687]: E0131 07:15:47.551541 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 31 07:15:47 crc kubenswrapper[4687]: E0131 07:15:47.551627 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/40f24838-c89e-4787-bd07-80871dd0bece-operator-scripts podName:40f24838-c89e-4787-bd07-80871dd0bece nodeName:}" failed. No retries permitted until 2026-01-31 07:15:48.051608465 +0000 UTC m=+1974.328868040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/40f24838-c89e-4787-bd07-80871dd0bece-operator-scripts") pod "keystone1184-account-delete-9tvfj" (UID: "40f24838-c89e-4787-bd07-80871dd0bece") : configmap "openstack-scripts" not found Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.692592 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.757567 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.799667 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/rabbitmq-server-0" podUID="33674fdf-dc91-46fd-a4d5-795ff7fd4211" containerName="rabbitmq" containerID="cri-o://1a9e11626e862f9e085c571a1f0dccd5f1c46c3ae1bbacf1035e66065b30d721" gracePeriod=604800 Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.848059 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/rabbitmq-server-0" podUID="33674fdf-dc91-46fd-a4d5-795ff7fd4211" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.62:5672: connect: connection refused" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.854811 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.854942 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-config-data-default\") pod \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.855149 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-config-data-generated\") pod \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " Jan 31 07:15:47 crc 
kubenswrapper[4687]: I0131 07:15:47.855215 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-operator-scripts\") pod \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.855300 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4m96\" (UniqueName: \"kubernetes.io/projected/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-kube-api-access-x4m96\") pod \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.855342 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-kolla-config\") pod \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\" (UID: \"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6\") " Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.855546 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" (UID: "fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.855868 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.856089 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" (UID: "fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.856086 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" (UID: "fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.856288 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" (UID: "fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.861612 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-kube-api-access-x4m96" (OuterVolumeSpecName: "kube-api-access-x4m96") pod "fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" (UID: "fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6"). InnerVolumeSpecName "kube-api-access-x4m96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.884402 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" (UID: "fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.958043 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4m96\" (UniqueName: \"kubernetes.io/projected/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-kube-api-access-x4m96\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.958086 4687 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.958122 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.958137 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-config-data-generated\") on node \"crc\" 
DevicePath \"\"" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.958147 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:47 crc kubenswrapper[4687]: I0131 07:15:47.978404 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.059430 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkzt9\" (UniqueName: \"kubernetes.io/projected/e4e7f2df-a655-413a-8b76-063ef8a2e338-kube-api-access-bkzt9\") pod \"root-account-create-update-lvsnd\" (UID: \"e4e7f2df-a655-413a-8b76-063ef8a2e338\") " pod="glance-kuttl-tests/root-account-create-update-lvsnd" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.059510 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4e7f2df-a655-413a-8b76-063ef8a2e338-operator-scripts\") pod \"root-account-create-update-lvsnd\" (UID: \"e4e7f2df-a655-413a-8b76-063ef8a2e338\") " pod="glance-kuttl-tests/root-account-create-update-lvsnd" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.059590 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:48 crc kubenswrapper[4687]: E0131 07:15:48.059663 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 31 07:15:48 crc kubenswrapper[4687]: E0131 07:15:48.059728 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/40f24838-c89e-4787-bd07-80871dd0bece-operator-scripts 
podName:40f24838-c89e-4787-bd07-80871dd0bece nodeName:}" failed. No retries permitted until 2026-01-31 07:15:49.059707845 +0000 UTC m=+1975.336967420 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/40f24838-c89e-4787-bd07-80871dd0bece-operator-scripts") pod "keystone1184-account-delete-9tvfj" (UID: "40f24838-c89e-4787-bd07-80871dd0bece") : configmap "openstack-scripts" not found Jan 31 07:15:48 crc kubenswrapper[4687]: E0131 07:15:48.059826 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 31 07:15:48 crc kubenswrapper[4687]: E0131 07:15:48.060007 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4e7f2df-a655-413a-8b76-063ef8a2e338-operator-scripts podName:e4e7f2df-a655-413a-8b76-063ef8a2e338 nodeName:}" failed. No retries permitted until 2026-01-31 07:15:50.059971922 +0000 UTC m=+1976.337231687 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e4e7f2df-a655-413a-8b76-063ef8a2e338-operator-scripts") pod "root-account-create-update-lvsnd" (UID: "e4e7f2df-a655-413a-8b76-063ef8a2e338") : configmap "openstack-scripts" not found Jan 31 07:15:48 crc kubenswrapper[4687]: E0131 07:15:48.071899 4687 projected.go:194] Error preparing data for projected volume kube-api-access-bkzt9 for pod glance-kuttl-tests/root-account-create-update-lvsnd: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 31 07:15:48 crc kubenswrapper[4687]: E0131 07:15:48.071986 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4e7f2df-a655-413a-8b76-063ef8a2e338-kube-api-access-bkzt9 podName:e4e7f2df-a655-413a-8b76-063ef8a2e338 nodeName:}" failed. No retries permitted until 2026-01-31 07:15:50.07196384 +0000 UTC m=+1976.349223415 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bkzt9" (UniqueName: "kubernetes.io/projected/e4e7f2df-a655-413a-8b76-063ef8a2e338-kube-api-access-bkzt9") pod "root-account-create-update-lvsnd" (UID: "e4e7f2df-a655-413a-8b76-063ef8a2e338") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.225742 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/memcached-0" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.364314 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-kolla-config\") pod \"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8\" (UID: \"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8\") " Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.364646 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvlc7\" (UniqueName: \"kubernetes.io/projected/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-kube-api-access-nvlc7\") pod \"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8\" (UID: \"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8\") " Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.364696 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-config-data\") pod \"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8\" (UID: \"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8\") " Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.364915 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "7186f0a0-8f6a-465e-b18d-be6b3b28d1c8" (UID: "7186f0a0-8f6a-465e-b18d-be6b3b28d1c8"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.365167 4687 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.365295 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-config-data" (OuterVolumeSpecName: "config-data") pod "7186f0a0-8f6a-465e-b18d-be6b3b28d1c8" (UID: "7186f0a0-8f6a-465e-b18d-be6b3b28d1c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.379802 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-kube-api-access-nvlc7" (OuterVolumeSpecName: "kube-api-access-nvlc7") pod "7186f0a0-8f6a-465e-b18d-be6b3b28d1c8" (UID: "7186f0a0-8f6a-465e-b18d-be6b3b28d1c8"). InnerVolumeSpecName "kube-api-access-nvlc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.395869 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.395869 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6","Type":"ContainerDied","Data":"a455cab65216009dd0964f2f5140fe7682f00c9bf94612d96d740821ae51b381"} Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.395935 4687 scope.go:117] "RemoveContainer" containerID="c043d3184ab54a35d1e0f449d503797f83fe59efcc6761224fdebfe2d46a168b" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.402750 4687 generic.go:334] "Generic (PLEG): container finished" podID="7186f0a0-8f6a-465e-b18d-be6b3b28d1c8" containerID="e4647911bc50b591925814fdb85949e6e40a2cfc1eecb5a61c8bb5f387d31ebb" exitCode=0 Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.402844 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/memcached-0" event={"ID":"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8","Type":"ContainerDied","Data":"e4647911bc50b591925814fdb85949e6e40a2cfc1eecb5a61c8bb5f387d31ebb"} Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.402875 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/memcached-0" event={"ID":"7186f0a0-8f6a-465e-b18d-be6b3b28d1c8","Type":"ContainerDied","Data":"c83acd8175d29811791030a8ff2b871abd4624afb9f8c503e0de3353544a4a54"} Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.402932 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/memcached-0" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.421144 4687 generic.go:334] "Generic (PLEG): container finished" podID="40f24838-c89e-4787-bd07-80871dd0bece" containerID="f0576bdc41772ccaaa997faa132f364b41a83b318123f7022c4916593ea8cd3a" exitCode=1 Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.421212 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/root-account-create-update-lvsnd" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.421808 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" event={"ID":"40f24838-c89e-4787-bd07-80871dd0bece","Type":"ContainerDied","Data":"f0576bdc41772ccaaa997faa132f364b41a83b318123f7022c4916593ea8cd3a"} Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.422519 4687 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" secret="" err="secret \"galera-openstack-dockercfg-wtz8g\" not found" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.422683 4687 scope.go:117] "RemoveContainer" containerID="f0576bdc41772ccaaa997faa132f364b41a83b318123f7022c4916593ea8cd3a" Jan 31 07:15:48 crc kubenswrapper[4687]: E0131 07:15:48.423057 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-delete\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-delete pod=keystone1184-account-delete-9tvfj_glance-kuttl-tests(40f24838-c89e-4787-bd07-80871dd0bece)\"" pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" podUID="40f24838-c89e-4787-bd07-80871dd0bece" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.447585 4687 scope.go:117] "RemoveContainer" containerID="b4d1d2310481ed255cf1785b3f923d2133eb8ab1ec6ca22e85e878bdb467855e" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.452459 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.464501 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.466153 4687 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-nvlc7\" (UniqueName: \"kubernetes.io/projected/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-kube-api-access-nvlc7\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.466192 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.484102 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/root-account-create-update-lvsnd"] Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.490218 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/root-account-create-update-lvsnd"] Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.492428 4687 scope.go:117] "RemoveContainer" containerID="e4647911bc50b591925814fdb85949e6e40a2cfc1eecb5a61c8bb5f387d31ebb" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.521810 4687 scope.go:117] "RemoveContainer" containerID="e4647911bc50b591925814fdb85949e6e40a2cfc1eecb5a61c8bb5f387d31ebb" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.523751 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/memcached-0"] Jan 31 07:15:48 crc kubenswrapper[4687]: E0131 07:15:48.523866 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4647911bc50b591925814fdb85949e6e40a2cfc1eecb5a61c8bb5f387d31ebb\": container with ID starting with e4647911bc50b591925814fdb85949e6e40a2cfc1eecb5a61c8bb5f387d31ebb not found: ID does not exist" containerID="e4647911bc50b591925814fdb85949e6e40a2cfc1eecb5a61c8bb5f387d31ebb" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.523910 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4647911bc50b591925814fdb85949e6e40a2cfc1eecb5a61c8bb5f387d31ebb"} err="failed 
to get container status \"e4647911bc50b591925814fdb85949e6e40a2cfc1eecb5a61c8bb5f387d31ebb\": rpc error: code = NotFound desc = could not find container \"e4647911bc50b591925814fdb85949e6e40a2cfc1eecb5a61c8bb5f387d31ebb\": container with ID starting with e4647911bc50b591925814fdb85949e6e40a2cfc1eecb5a61c8bb5f387d31ebb not found: ID does not exist" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.523942 4687 scope.go:117] "RemoveContainer" containerID="f14efaaa928709d878ca8e0fb3a5b9bdc560fb78756e0f77d0e370db19910c99" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.530465 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/memcached-0"] Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.614033 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/openstack-galera-1" podUID="0e0aeef7-ccda-496c-ba2b-ca020077baf2" containerName="galera" containerID="cri-o://a40bbb2d40b4c752d33842c414645100971102378c943031fff9fcb57cf917f6" gracePeriod=28 Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.669340 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkzt9\" (UniqueName: \"kubernetes.io/projected/e4e7f2df-a655-413a-8b76-063ef8a2e338-kube-api-access-bkzt9\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.669377 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4e7f2df-a655-413a-8b76-063ef8a2e338-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.698186 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz"] Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.698392 4687 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" podUID="f6787f12-c3f6-4611-b5b0-1b26155d4d41" containerName="manager" containerID="cri-o://33fe9e25a9ae88caf37b956f1812aa55a4cbd370a72ce83971048395f4299982" gracePeriod=10 Jan 31 07:15:48 crc kubenswrapper[4687]: E0131 07:15:48.807823 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a40bbb2d40b4c752d33842c414645100971102378c943031fff9fcb57cf917f6" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 31 07:15:48 crc kubenswrapper[4687]: E0131 07:15:48.810014 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a40bbb2d40b4c752d33842c414645100971102378c943031fff9fcb57cf917f6" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 31 07:15:48 crc kubenswrapper[4687]: E0131 07:15:48.812100 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a40bbb2d40b4c752d33842c414645100971102378c943031fff9fcb57cf917f6" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 31 07:15:48 crc kubenswrapper[4687]: E0131 07:15:48.812219 4687 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="glance-kuttl-tests/openstack-galera-1" podUID="0e0aeef7-ccda-496c-ba2b-ca020077baf2" containerName="galera" Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.890820 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/glance-operator-index-h6w75"] Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.891008 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/glance-operator-index-h6w75" podUID="47d8e3aa-adce-49bd-8e29-a0adeea6009e" containerName="registry-server" containerID="cri-o://8d031d0a222d46ec2116b63d32a7056ffd2315cc8cb1ed1a26c67f9f74410faf" gracePeriod=30 Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.933704 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp"] Jan 31 07:15:48 crc kubenswrapper[4687]: I0131 07:15:48.939538 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/f27ba61e30ed7a31f8d8765010bbc0b4d51ac80fef52fda4c758a1466dfpssp"] Jan 31 07:15:49 crc kubenswrapper[4687]: E0131 07:15:49.075132 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 31 07:15:49 crc kubenswrapper[4687]: E0131 07:15:49.075223 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/40f24838-c89e-4787-bd07-80871dd0bece-operator-scripts podName:40f24838-c89e-4787-bd07-80871dd0bece nodeName:}" failed. No retries permitted until 2026-01-31 07:15:51.075201116 +0000 UTC m=+1977.352460691 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/40f24838-c89e-4787-bd07-80871dd0bece-operator-scripts") pod "keystone1184-account-delete-9tvfj" (UID: "40f24838-c89e-4787-bd07-80871dd0bece") : configmap "openstack-scripts" not found Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.117729 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.207473 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.280580 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvwfm\" (UniqueName: \"kubernetes.io/projected/be44d699-42c9-4e7f-a533-8b39328ceedd-kube-api-access-hvwfm\") pod \"be44d699-42c9-4e7f-a533-8b39328ceedd\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.280647 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-config-data\") pod \"be44d699-42c9-4e7f-a533-8b39328ceedd\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.280677 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-scripts\") pod \"be44d699-42c9-4e7f-a533-8b39328ceedd\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.280715 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-fernet-keys\") pod \"be44d699-42c9-4e7f-a533-8b39328ceedd\" (UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.280760 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-credential-keys\") pod \"be44d699-42c9-4e7f-a533-8b39328ceedd\" 
(UID: \"be44d699-42c9-4e7f-a533-8b39328ceedd\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.289767 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "be44d699-42c9-4e7f-a533-8b39328ceedd" (UID: "be44d699-42c9-4e7f-a533-8b39328ceedd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.291053 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be44d699-42c9-4e7f-a533-8b39328ceedd-kube-api-access-hvwfm" (OuterVolumeSpecName: "kube-api-access-hvwfm") pod "be44d699-42c9-4e7f-a533-8b39328ceedd" (UID: "be44d699-42c9-4e7f-a533-8b39328ceedd"). InnerVolumeSpecName "kube-api-access-hvwfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.294334 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "be44d699-42c9-4e7f-a533-8b39328ceedd" (UID: "be44d699-42c9-4e7f-a533-8b39328ceedd"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.297628 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-scripts" (OuterVolumeSpecName: "scripts") pod "be44d699-42c9-4e7f-a533-8b39328ceedd" (UID: "be44d699-42c9-4e7f-a533-8b39328ceedd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.347710 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-config-data" (OuterVolumeSpecName: "config-data") pod "be44d699-42c9-4e7f-a533-8b39328ceedd" (UID: "be44d699-42c9-4e7f-a533-8b39328ceedd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.381847 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpc7l\" (UniqueName: \"kubernetes.io/projected/f6787f12-c3f6-4611-b5b0-1b26155d4d41-kube-api-access-gpc7l\") pod \"f6787f12-c3f6-4611-b5b0-1b26155d4d41\" (UID: \"f6787f12-c3f6-4611-b5b0-1b26155d4d41\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.381921 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6787f12-c3f6-4611-b5b0-1b26155d4d41-webhook-cert\") pod \"f6787f12-c3f6-4611-b5b0-1b26155d4d41\" (UID: \"f6787f12-c3f6-4611-b5b0-1b26155d4d41\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.411592 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6787f12-c3f6-4611-b5b0-1b26155d4d41-kube-api-access-gpc7l" (OuterVolumeSpecName: "kube-api-access-gpc7l") pod "f6787f12-c3f6-4611-b5b0-1b26155d4d41" (UID: "f6787f12-c3f6-4611-b5b0-1b26155d4d41"). InnerVolumeSpecName "kube-api-access-gpc7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.414492 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f6787f12-c3f6-4611-b5b0-1b26155d4d41-apiservice-cert\") pod \"f6787f12-c3f6-4611-b5b0-1b26155d4d41\" (UID: \"f6787f12-c3f6-4611-b5b0-1b26155d4d41\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.415067 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvwfm\" (UniqueName: \"kubernetes.io/projected/be44d699-42c9-4e7f-a533-8b39328ceedd-kube-api-access-hvwfm\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.415082 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.415093 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.415103 4687 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.415113 4687 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be44d699-42c9-4e7f-a533-8b39328ceedd-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.424650 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6787f12-c3f6-4611-b5b0-1b26155d4d41-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod 
"f6787f12-c3f6-4611-b5b0-1b26155d4d41" (UID: "f6787f12-c3f6-4611-b5b0-1b26155d4d41"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.431645 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6787f12-c3f6-4611-b5b0-1b26155d4d41-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "f6787f12-c3f6-4611-b5b0-1b26155d4d41" (UID: "f6787f12-c3f6-4611-b5b0-1b26155d4d41"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.456196 4687 generic.go:334] "Generic (PLEG): container finished" podID="47d8e3aa-adce-49bd-8e29-a0adeea6009e" containerID="8d031d0a222d46ec2116b63d32a7056ffd2315cc8cb1ed1a26c67f9f74410faf" exitCode=0 Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.456303 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-h6w75" event={"ID":"47d8e3aa-adce-49bd-8e29-a0adeea6009e","Type":"ContainerDied","Data":"8d031d0a222d46ec2116b63d32a7056ffd2315cc8cb1ed1a26c67f9f74410faf"} Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.459673 4687 generic.go:334] "Generic (PLEG): container finished" podID="33674fdf-dc91-46fd-a4d5-795ff7fd4211" containerID="1a9e11626e862f9e085c571a1f0dccd5f1c46c3ae1bbacf1035e66065b30d721" exitCode=0 Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.459741 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"33674fdf-dc91-46fd-a4d5-795ff7fd4211","Type":"ContainerDied","Data":"1a9e11626e862f9e085c571a1f0dccd5f1c46c3ae1bbacf1035e66065b30d721"} Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.460913 4687 generic.go:334] "Generic (PLEG): container finished" podID="be44d699-42c9-4e7f-a533-8b39328ceedd" 
containerID="29b31b5466c245cfbc7695198a62a9fbcab112ad827e3a840d0e4cc528776e6a" exitCode=0 Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.460959 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" event={"ID":"be44d699-42c9-4e7f-a533-8b39328ceedd","Type":"ContainerDied","Data":"29b31b5466c245cfbc7695198a62a9fbcab112ad827e3a840d0e4cc528776e6a"} Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.460980 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" event={"ID":"be44d699-42c9-4e7f-a533-8b39328ceedd","Type":"ContainerDied","Data":"838c6f70b7065b4ab23a891259428382176a70a51c4d7caa78e1e68b7500a812"} Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.461002 4687 scope.go:117] "RemoveContainer" containerID="29b31b5466c245cfbc7695198a62a9fbcab112ad827e3a840d0e4cc528776e6a" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.461138 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-7f864d6549-bfflx" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.478142 4687 generic.go:334] "Generic (PLEG): container finished" podID="f6787f12-c3f6-4611-b5b0-1b26155d4d41" containerID="33fe9e25a9ae88caf37b956f1812aa55a4cbd370a72ce83971048395f4299982" exitCode=0 Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.478234 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" event={"ID":"f6787f12-c3f6-4611-b5b0-1b26155d4d41","Type":"ContainerDied","Data":"33fe9e25a9ae88caf37b956f1812aa55a4cbd370a72ce83971048395f4299982"} Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.478260 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" event={"ID":"f6787f12-c3f6-4611-b5b0-1b26155d4d41","Type":"ContainerDied","Data":"12f814a7f9d4dc47da3e7c033411ad7bfc305469ded99ea668c3de867e2237ff"} Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.478307 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.515688 4687 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f6787f12-c3f6-4611-b5b0-1b26155d4d41-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.515941 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpc7l\" (UniqueName: \"kubernetes.io/projected/f6787f12-c3f6-4611-b5b0-1b26155d4d41-kube-api-access-gpc7l\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.515953 4687 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6787f12-c3f6-4611-b5b0-1b26155d4d41-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.545711 4687 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" secret="" err="secret \"galera-openstack-dockercfg-wtz8g\" not found" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.545761 4687 scope.go:117] "RemoveContainer" containerID="f0576bdc41772ccaaa997faa132f364b41a83b318123f7022c4916593ea8cd3a" Jan 31 07:15:49 crc kubenswrapper[4687]: E0131 07:15:49.546119 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-delete\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-delete pod=keystone1184-account-delete-9tvfj_glance-kuttl-tests(40f24838-c89e-4787-bd07-80871dd0bece)\"" pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" podUID="40f24838-c89e-4787-bd07-80871dd0bece" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.548873 4687 scope.go:117] "RemoveContainer" containerID="29b31b5466c245cfbc7695198a62a9fbcab112ad827e3a840d0e4cc528776e6a" Jan 31 07:15:49 crc kubenswrapper[4687]: E0131 07:15:49.550562 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b31b5466c245cfbc7695198a62a9fbcab112ad827e3a840d0e4cc528776e6a\": container with ID starting with 29b31b5466c245cfbc7695198a62a9fbcab112ad827e3a840d0e4cc528776e6a not found: ID does not exist" containerID="29b31b5466c245cfbc7695198a62a9fbcab112ad827e3a840d0e4cc528776e6a" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.550614 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b31b5466c245cfbc7695198a62a9fbcab112ad827e3a840d0e4cc528776e6a"} err="failed to get container status \"29b31b5466c245cfbc7695198a62a9fbcab112ad827e3a840d0e4cc528776e6a\": rpc error: code = NotFound desc = could not find container \"29b31b5466c245cfbc7695198a62a9fbcab112ad827e3a840d0e4cc528776e6a\": container with ID starting with 29b31b5466c245cfbc7695198a62a9fbcab112ad827e3a840d0e4cc528776e6a not 
found: ID does not exist" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.550673 4687 scope.go:117] "RemoveContainer" containerID="33fe9e25a9ae88caf37b956f1812aa55a4cbd370a72ce83971048395f4299982" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.570811 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.590306 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz"] Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.678010 4687 scope.go:117] "RemoveContainer" containerID="33fe9e25a9ae88caf37b956f1812aa55a4cbd370a72ce83971048395f4299982" Jan 31 07:15:49 crc kubenswrapper[4687]: E0131 07:15:49.678910 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33fe9e25a9ae88caf37b956f1812aa55a4cbd370a72ce83971048395f4299982\": container with ID starting with 33fe9e25a9ae88caf37b956f1812aa55a4cbd370a72ce83971048395f4299982 not found: ID does not exist" containerID="33fe9e25a9ae88caf37b956f1812aa55a4cbd370a72ce83971048395f4299982" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.678969 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33fe9e25a9ae88caf37b956f1812aa55a4cbd370a72ce83971048395f4299982"} err="failed to get container status \"33fe9e25a9ae88caf37b956f1812aa55a4cbd370a72ce83971048395f4299982\": rpc error: code = NotFound desc = could not find container \"33fe9e25a9ae88caf37b956f1812aa55a4cbd370a72ce83971048395f4299982\": container with ID starting with 33fe9e25a9ae88caf37b956f1812aa55a4cbd370a72ce83971048395f4299982 not found: ID does not exist" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.710620 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d29e2c7-9c78-4903-938a-8feed8644190" path="/var/lib/kubelet/pods/1d29e2c7-9c78-4903-938a-8feed8644190/volumes" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.711493 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7186f0a0-8f6a-465e-b18d-be6b3b28d1c8" path="/var/lib/kubelet/pods/7186f0a0-8f6a-465e-b18d-be6b3b28d1c8/volumes" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.711905 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4e7f2df-a655-413a-8b76-063ef8a2e338" path="/var/lib/kubelet/pods/e4e7f2df-a655-413a-8b76-063ef8a2e338/volumes" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.712878 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" path="/var/lib/kubelet/pods/fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6/volumes" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.717318 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/glance-operator-controller-manager-66ccc6f9f9-68gsz"] Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.717387 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-7f864d6549-bfflx"] Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.717401 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-7f864d6549-bfflx"] Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.728348 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-index-h6w75" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.742489 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db\") pod \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.742809 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp6c5\" (UniqueName: \"kubernetes.io/projected/33674fdf-dc91-46fd-a4d5-795ff7fd4211-kube-api-access-rp6c5\") pod \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.742969 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33674fdf-dc91-46fd-a4d5-795ff7fd4211-pod-info\") pod \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.743084 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-confd\") pod \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.743262 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-plugins\") pod \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.743497 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-erlang-cookie\") pod \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.743643 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33674fdf-dc91-46fd-a4d5-795ff7fd4211-erlang-cookie-secret\") pod \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.743785 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33674fdf-dc91-46fd-a4d5-795ff7fd4211-plugins-conf\") pod \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\" (UID: \"33674fdf-dc91-46fd-a4d5-795ff7fd4211\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.744266 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "33674fdf-dc91-46fd-a4d5-795ff7fd4211" (UID: "33674fdf-dc91-46fd-a4d5-795ff7fd4211"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.745580 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33674fdf-dc91-46fd-a4d5-795ff7fd4211-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "33674fdf-dc91-46fd-a4d5-795ff7fd4211" (UID: "33674fdf-dc91-46fd-a4d5-795ff7fd4211"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.746961 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "33674fdf-dc91-46fd-a4d5-795ff7fd4211" (UID: "33674fdf-dc91-46fd-a4d5-795ff7fd4211"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.747873 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33674fdf-dc91-46fd-a4d5-795ff7fd4211-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "33674fdf-dc91-46fd-a4d5-795ff7fd4211" (UID: "33674fdf-dc91-46fd-a4d5-795ff7fd4211"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.753012 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/33674fdf-dc91-46fd-a4d5-795ff7fd4211-pod-info" (OuterVolumeSpecName: "pod-info") pod "33674fdf-dc91-46fd-a4d5-795ff7fd4211" (UID: "33674fdf-dc91-46fd-a4d5-795ff7fd4211"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.754482 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33674fdf-dc91-46fd-a4d5-795ff7fd4211-kube-api-access-rp6c5" (OuterVolumeSpecName: "kube-api-access-rp6c5") pod "33674fdf-dc91-46fd-a4d5-795ff7fd4211" (UID: "33674fdf-dc91-46fd-a4d5-795ff7fd4211"). InnerVolumeSpecName "kube-api-access-rp6c5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.778893 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db" (OuterVolumeSpecName: "persistence") pod "33674fdf-dc91-46fd-a4d5-795ff7fd4211" (UID: "33674fdf-dc91-46fd-a4d5-795ff7fd4211"). InnerVolumeSpecName "pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.841371 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "33674fdf-dc91-46fd-a4d5-795ff7fd4211" (UID: "33674fdf-dc91-46fd-a4d5-795ff7fd4211"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.845495 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bpb7\" (UniqueName: \"kubernetes.io/projected/47d8e3aa-adce-49bd-8e29-a0adeea6009e-kube-api-access-4bpb7\") pod \"47d8e3aa-adce-49bd-8e29-a0adeea6009e\" (UID: \"47d8e3aa-adce-49bd-8e29-a0adeea6009e\") " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.845850 4687 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33674fdf-dc91-46fd-a4d5-795ff7fd4211-pod-info\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.845879 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.845896 4687 reconciler_common.go:293] "Volume detached for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.845908 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33674fdf-dc91-46fd-a4d5-795ff7fd4211-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.845921 4687 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33674fdf-dc91-46fd-a4d5-795ff7fd4211-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.845932 4687 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33674fdf-dc91-46fd-a4d5-795ff7fd4211-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.845961 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db\") on node \"crc\" " Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.845976 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp6c5\" (UniqueName: \"kubernetes.io/projected/33674fdf-dc91-46fd-a4d5-795ff7fd4211-kube-api-access-rp6c5\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.853582 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47d8e3aa-adce-49bd-8e29-a0adeea6009e-kube-api-access-4bpb7" (OuterVolumeSpecName: "kube-api-access-4bpb7") pod "47d8e3aa-adce-49bd-8e29-a0adeea6009e" (UID: "47d8e3aa-adce-49bd-8e29-a0adeea6009e"). InnerVolumeSpecName "kube-api-access-4bpb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.882318 4687 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.882495 4687 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db") on node "crc" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.947596 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bpb7\" (UniqueName: \"kubernetes.io/projected/47d8e3aa-adce-49bd-8e29-a0adeea6009e-kube-api-access-4bpb7\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:49 crc kubenswrapper[4687]: I0131 07:15:49.947649 4687 reconciler_common.go:293] "Volume detached for volume \"pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc05e10c-e311-4a4e-ba21-51e7d1fee9db\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.239307 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5"] Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.239833 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" podUID="e229e979-1176-4e84-9dab-1027aee52b34" containerName="manager" containerID="cri-o://042f494a78d21700df8fb39607568af9066a7e2d66ad07dff7bfc862061b9adf" gracePeriod=10 Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.460761 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-index-tnwzr"] Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.460986 4687 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack-operators/swift-operator-index-tnwzr" podUID="eab13481-b0e4-40a4-8541-7738638251a9" containerName="registry-server" containerID="cri-o://5ddf63cdfc9b6199fb6f313fd8c6282c70d20f24b0029732cfc2a93c7f6a22f2" gracePeriod=30 Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.516614 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4"] Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.527188 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4rf5b4"] Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.547688 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-h6w75" event={"ID":"47d8e3aa-adce-49bd-8e29-a0adeea6009e","Type":"ContainerDied","Data":"dc04a9182c3fe270c2575db4f62b09ff1a5e0edd26f41c409efffb29bd4f204f"} Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.548001 4687 scope.go:117] "RemoveContainer" containerID="8d031d0a222d46ec2116b63d32a7056ffd2315cc8cb1ed1a26c67f9f74410faf" Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.548005 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-h6w75" Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.551686 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone1184-account-delete-9tvfj"] Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.562515 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"33674fdf-dc91-46fd-a4d5-795ff7fd4211","Type":"ContainerDied","Data":"b1d4679aa9cc40b8243af78fe84abc6bdb057cea1fe0720a3b62e6f4b727d447"} Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.563613 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.582809 4687 generic.go:334] "Generic (PLEG): container finished" podID="e229e979-1176-4e84-9dab-1027aee52b34" containerID="042f494a78d21700df8fb39607568af9066a7e2d66ad07dff7bfc862061b9adf" exitCode=0 Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.582913 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" event={"ID":"e229e979-1176-4e84-9dab-1027aee52b34","Type":"ContainerDied","Data":"042f494a78d21700df8fb39607568af9066a7e2d66ad07dff7bfc862061b9adf"} Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.660906 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/glance-operator-index-h6w75"] Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.675937 4687 scope.go:117] "RemoveContainer" containerID="1a9e11626e862f9e085c571a1f0dccd5f1c46c3ae1bbacf1035e66065b30d721" Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.676323 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/glance-operator-index-h6w75"] Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.685376 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/openstack-galera-0" podUID="ee3a4967-773c-4106-955e-ce3823c96169" containerName="galera" containerID="cri-o://55e095bf402d4decfeb0d7eab9463616f714666ced8929276007bd2c6f82ed79" gracePeriod=26 Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.687083 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.695024 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.700214 4687 scope.go:117] "RemoveContainer" 
containerID="703c0d772a929eebfafa746449afc703a9975ddbf680361a13ce0ddeaea5d41f" Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.825871 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.969576 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e229e979-1176-4e84-9dab-1027aee52b34-apiservice-cert\") pod \"e229e979-1176-4e84-9dab-1027aee52b34\" (UID: \"e229e979-1176-4e84-9dab-1027aee52b34\") " Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.969700 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bw8f\" (UniqueName: \"kubernetes.io/projected/e229e979-1176-4e84-9dab-1027aee52b34-kube-api-access-6bw8f\") pod \"e229e979-1176-4e84-9dab-1027aee52b34\" (UID: \"e229e979-1176-4e84-9dab-1027aee52b34\") " Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.969742 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e229e979-1176-4e84-9dab-1027aee52b34-webhook-cert\") pod \"e229e979-1176-4e84-9dab-1027aee52b34\" (UID: \"e229e979-1176-4e84-9dab-1027aee52b34\") " Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.975308 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e229e979-1176-4e84-9dab-1027aee52b34-kube-api-access-6bw8f" (OuterVolumeSpecName: "kube-api-access-6bw8f") pod "e229e979-1176-4e84-9dab-1027aee52b34" (UID: "e229e979-1176-4e84-9dab-1027aee52b34"). InnerVolumeSpecName "kube-api-access-6bw8f". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.975507 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e229e979-1176-4e84-9dab-1027aee52b34-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "e229e979-1176-4e84-9dab-1027aee52b34" (UID: "e229e979-1176-4e84-9dab-1027aee52b34"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.976190 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e229e979-1176-4e84-9dab-1027aee52b34-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "e229e979-1176-4e84-9dab-1027aee52b34" (UID: "e229e979-1176-4e84-9dab-1027aee52b34"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:15:50 crc kubenswrapper[4687]: I0131 07:15:50.990452 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj"
Jan 31 07:15:51 crc kubenswrapper[4687]: I0131 07:15:51.071477 4687 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e229e979-1176-4e84-9dab-1027aee52b34-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:51 crc kubenswrapper[4687]: I0131 07:15:51.071517 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bw8f\" (UniqueName: \"kubernetes.io/projected/e229e979-1176-4e84-9dab-1027aee52b34-kube-api-access-6bw8f\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:51 crc kubenswrapper[4687]: I0131 07:15:51.071528 4687 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e229e979-1176-4e84-9dab-1027aee52b34-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:51 crc kubenswrapper[4687]: I0131 07:15:51.173198 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f24838-c89e-4787-bd07-80871dd0bece-operator-scripts\") pod \"40f24838-c89e-4787-bd07-80871dd0bece\" (UID: \"40f24838-c89e-4787-bd07-80871dd0bece\") "
Jan 31 07:15:51 crc kubenswrapper[4687]: I0131 07:15:51.173266 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2kw6\" (UniqueName: \"kubernetes.io/projected/40f24838-c89e-4787-bd07-80871dd0bece-kube-api-access-c2kw6\") pod \"40f24838-c89e-4787-bd07-80871dd0bece\" (UID: \"40f24838-c89e-4787-bd07-80871dd0bece\") "
Jan 31 07:15:51 crc kubenswrapper[4687]: I0131 07:15:51.174183 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f24838-c89e-4787-bd07-80871dd0bece-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40f24838-c89e-4787-bd07-80871dd0bece" (UID: "40f24838-c89e-4787-bd07-80871dd0bece"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:15:51 crc kubenswrapper[4687]: I0131 07:15:51.178691 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f24838-c89e-4787-bd07-80871dd0bece-kube-api-access-c2kw6" (OuterVolumeSpecName: "kube-api-access-c2kw6") pod "40f24838-c89e-4787-bd07-80871dd0bece" (UID: "40f24838-c89e-4787-bd07-80871dd0bece"). InnerVolumeSpecName "kube-api-access-c2kw6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:15:51 crc kubenswrapper[4687]: I0131 07:15:51.245255 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-1"
Jan 31 07:15:51 crc kubenswrapper[4687]: I0131 07:15:51.275290 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f24838-c89e-4787-bd07-80871dd0bece-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:51 crc kubenswrapper[4687]: I0131 07:15:51.275325 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2kw6\" (UniqueName: \"kubernetes.io/projected/40f24838-c89e-4787-bd07-80871dd0bece-kube-api-access-c2kw6\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.375941 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") "
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.376046 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55vhm\" (UniqueName: \"kubernetes.io/projected/0e0aeef7-ccda-496c-ba2b-ca020077baf2-kube-api-access-55vhm\") pod \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") "
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.376082 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-kolla-config\") pod \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") "
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.376126 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0e0aeef7-ccda-496c-ba2b-ca020077baf2-config-data-generated\") pod \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") "
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.376182 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-operator-scripts\") pod \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") "
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.376247 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-config-data-default\") pod \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\" (UID: \"0e0aeef7-ccda-496c-ba2b-ca020077baf2\") "
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.377352 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "0e0aeef7-ccda-496c-ba2b-ca020077baf2" (UID: "0e0aeef7-ccda-496c-ba2b-ca020077baf2"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.377858 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "0e0aeef7-ccda-496c-ba2b-ca020077baf2" (UID: "0e0aeef7-ccda-496c-ba2b-ca020077baf2"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.382013 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0aeef7-ccda-496c-ba2b-ca020077baf2-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "0e0aeef7-ccda-496c-ba2b-ca020077baf2" (UID: "0e0aeef7-ccda-496c-ba2b-ca020077baf2"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.383862 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e0aeef7-ccda-496c-ba2b-ca020077baf2" (UID: "0e0aeef7-ccda-496c-ba2b-ca020077baf2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.392221 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0aeef7-ccda-496c-ba2b-ca020077baf2-kube-api-access-55vhm" (OuterVolumeSpecName: "kube-api-access-55vhm") pod "0e0aeef7-ccda-496c-ba2b-ca020077baf2" (UID: "0e0aeef7-ccda-496c-ba2b-ca020077baf2"). InnerVolumeSpecName "kube-api-access-55vhm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.397838 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "mysql-db") pod "0e0aeef7-ccda-496c-ba2b-ca020077baf2" (UID: "0e0aeef7-ccda-496c-ba2b-ca020077baf2"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.443400 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-tnwzr"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.477718 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-config-data-default\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.477779 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.477792 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55vhm\" (UniqueName: \"kubernetes.io/projected/0e0aeef7-ccda-496c-ba2b-ca020077baf2-kube-api-access-55vhm\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.477806 4687 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-kolla-config\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.477818 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0e0aeef7-ccda-496c-ba2b-ca020077baf2-config-data-generated\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.477829 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e0aeef7-ccda-496c-ba2b-ca020077baf2-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.493085 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.579089 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxrmw\" (UniqueName: \"kubernetes.io/projected/eab13481-b0e4-40a4-8541-7738638251a9-kube-api-access-dxrmw\") pod \"eab13481-b0e4-40a4-8541-7738638251a9\" (UID: \"eab13481-b0e4-40a4-8541-7738638251a9\") "
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.579541 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.582135 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eab13481-b0e4-40a4-8541-7738638251a9-kube-api-access-dxrmw" (OuterVolumeSpecName: "kube-api-access-dxrmw") pod "eab13481-b0e4-40a4-8541-7738638251a9" (UID: "eab13481-b0e4-40a4-8541-7738638251a9"). InnerVolumeSpecName "kube-api-access-dxrmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.595510 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj" event={"ID":"40f24838-c89e-4787-bd07-80871dd0bece","Type":"ContainerDied","Data":"98122d17e311c92ca0af403368b2059f21b7156ba1aab6c6f56f2d3c8cbc77b4"}
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.595590 4687 scope.go:117] "RemoveContainer" containerID="f0576bdc41772ccaaa997faa132f364b41a83b318123f7022c4916593ea8cd3a"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.595678 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone1184-account-delete-9tvfj"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.608306 4687 generic.go:334] "Generic (PLEG): container finished" podID="eab13481-b0e4-40a4-8541-7738638251a9" containerID="5ddf63cdfc9b6199fb6f313fd8c6282c70d20f24b0029732cfc2a93c7f6a22f2" exitCode=0
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.608373 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-tnwzr"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.615029 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.624449 4687 generic.go:334] "Generic (PLEG): container finished" podID="ee3a4967-773c-4106-955e-ce3823c96169" containerID="55e095bf402d4decfeb0d7eab9463616f714666ced8929276007bd2c6f82ed79" exitCode=0
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.624501 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33674fdf-dc91-46fd-a4d5-795ff7fd4211" path="/var/lib/kubelet/pods/33674fdf-dc91-46fd-a4d5-795ff7fd4211/volumes"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.625247 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47d8e3aa-adce-49bd-8e29-a0adeea6009e" path="/var/lib/kubelet/pods/47d8e3aa-adce-49bd-8e29-a0adeea6009e/volumes"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.625996 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8040a852-f1a4-420b-9897-a1c71c5b231c" path="/var/lib/kubelet/pods/8040a852-f1a4-420b-9897-a1c71c5b231c/volumes"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.627235 4687 generic.go:334] "Generic (PLEG): container finished" podID="0e0aeef7-ccda-496c-ba2b-ca020077baf2" containerID="a40bbb2d40b4c752d33842c414645100971102378c943031fff9fcb57cf917f6" exitCode=0
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.627320 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-1"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.628664 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be44d699-42c9-4e7f-a533-8b39328ceedd" path="/var/lib/kubelet/pods/be44d699-42c9-4e7f-a533-8b39328ceedd/volumes"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.629363 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6787f12-c3f6-4611-b5b0-1b26155d4d41" path="/var/lib/kubelet/pods/f6787f12-c3f6-4611-b5b0-1b26155d4d41/volumes"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.630703 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-tnwzr" event={"ID":"eab13481-b0e4-40a4-8541-7738638251a9","Type":"ContainerDied","Data":"5ddf63cdfc9b6199fb6f313fd8c6282c70d20f24b0029732cfc2a93c7f6a22f2"}
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.630729 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-tnwzr" event={"ID":"eab13481-b0e4-40a4-8541-7738638251a9","Type":"ContainerDied","Data":"e6d5b8ddcd14f246d4d608a0dafc5908716f80a66b9ddf3784bea871e54f6b82"}
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.630743 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" event={"ID":"e229e979-1176-4e84-9dab-1027aee52b34","Type":"ContainerDied","Data":"f1dfbb04b137d953d7dcb87137b305f1219fdaa6a1a779063d8de1984b77da47"}
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.630754 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"ee3a4967-773c-4106-955e-ce3823c96169","Type":"ContainerDied","Data":"55e095bf402d4decfeb0d7eab9463616f714666ced8929276007bd2c6f82ed79"}
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.630767 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"0e0aeef7-ccda-496c-ba2b-ca020077baf2","Type":"ContainerDied","Data":"a40bbb2d40b4c752d33842c414645100971102378c943031fff9fcb57cf917f6"}
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.630778 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"0e0aeef7-ccda-496c-ba2b-ca020077baf2","Type":"ContainerDied","Data":"607e21d95b9712f768e98cf260beda4f4809b83f85ec5348f12db51d2057e720"}
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.630796 4687 scope.go:117] "RemoveContainer" containerID="5ddf63cdfc9b6199fb6f313fd8c6282c70d20f24b0029732cfc2a93c7f6a22f2"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.659753 4687 scope.go:117] "RemoveContainer" containerID="5ddf63cdfc9b6199fb6f313fd8c6282c70d20f24b0029732cfc2a93c7f6a22f2"
Jan 31 07:15:52 crc kubenswrapper[4687]: E0131 07:15:51.660172 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ddf63cdfc9b6199fb6f313fd8c6282c70d20f24b0029732cfc2a93c7f6a22f2\": container with ID starting with 5ddf63cdfc9b6199fb6f313fd8c6282c70d20f24b0029732cfc2a93c7f6a22f2 not found: ID does not exist" containerID="5ddf63cdfc9b6199fb6f313fd8c6282c70d20f24b0029732cfc2a93c7f6a22f2"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.660256 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ddf63cdfc9b6199fb6f313fd8c6282c70d20f24b0029732cfc2a93c7f6a22f2"} err="failed to get container status \"5ddf63cdfc9b6199fb6f313fd8c6282c70d20f24b0029732cfc2a93c7f6a22f2\": rpc error: code = NotFound desc = could not find container \"5ddf63cdfc9b6199fb6f313fd8c6282c70d20f24b0029732cfc2a93c7f6a22f2\": container with ID starting with 5ddf63cdfc9b6199fb6f313fd8c6282c70d20f24b0029732cfc2a93c7f6a22f2 not found: ID does not exist"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.660280 4687 scope.go:117] "RemoveContainer" containerID="042f494a78d21700df8fb39607568af9066a7e2d66ad07dff7bfc862061b9adf"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.688635 4687 scope.go:117] "RemoveContainer" containerID="a40bbb2d40b4c752d33842c414645100971102378c943031fff9fcb57cf917f6"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.689304 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxrmw\" (UniqueName: \"kubernetes.io/projected/eab13481-b0e4-40a4-8541-7738638251a9-kube-api-access-dxrmw\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.695056 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5" podUID="e229e979-1176-4e84-9dab-1027aee52b34" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": dial tcp 10.217.0.87:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.705286 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-index-tnwzr"]
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.714471 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/swift-operator-index-tnwzr"]
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.720478 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone1184-account-delete-9tvfj"]
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.722515 4687 scope.go:117] "RemoveContainer" containerID="1a45e3150202b623290582f2038408c30b6380a28ba980e4c46a13e79e9cfe84"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.725709 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone1184-account-delete-9tvfj"]
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.731061 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"]
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.735783 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"]
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.740336 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5"]
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.745647 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/swift-operator-controller-manager-648b98dfd7-f6vp5"]
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.757611 4687 scope.go:117] "RemoveContainer" containerID="a40bbb2d40b4c752d33842c414645100971102378c943031fff9fcb57cf917f6"
Jan 31 07:15:52 crc kubenswrapper[4687]: E0131 07:15:51.762546 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a40bbb2d40b4c752d33842c414645100971102378c943031fff9fcb57cf917f6\": container with ID starting with a40bbb2d40b4c752d33842c414645100971102378c943031fff9fcb57cf917f6 not found: ID does not exist" containerID="a40bbb2d40b4c752d33842c414645100971102378c943031fff9fcb57cf917f6"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.762592 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a40bbb2d40b4c752d33842c414645100971102378c943031fff9fcb57cf917f6"} err="failed to get container status \"a40bbb2d40b4c752d33842c414645100971102378c943031fff9fcb57cf917f6\": rpc error: code = NotFound desc = could not find container \"a40bbb2d40b4c752d33842c414645100971102378c943031fff9fcb57cf917f6\": container with ID starting with a40bbb2d40b4c752d33842c414645100971102378c943031fff9fcb57cf917f6 not found: ID does not exist"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.762620 4687 scope.go:117] "RemoveContainer" containerID="1a45e3150202b623290582f2038408c30b6380a28ba980e4c46a13e79e9cfe84"
Jan 31 07:15:52 crc kubenswrapper[4687]: E0131 07:15:51.764333 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a45e3150202b623290582f2038408c30b6380a28ba980e4c46a13e79e9cfe84\": container with ID starting with 1a45e3150202b623290582f2038408c30b6380a28ba980e4c46a13e79e9cfe84 not found: ID does not exist" containerID="1a45e3150202b623290582f2038408c30b6380a28ba980e4c46a13e79e9cfe84"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:51.764364 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a45e3150202b623290582f2038408c30b6380a28ba980e4c46a13e79e9cfe84"} err="failed to get container status \"1a45e3150202b623290582f2038408c30b6380a28ba980e4c46a13e79e9cfe84\": rpc error: code = NotFound desc = could not find container \"1a45e3150202b623290582f2038408c30b6380a28ba980e4c46a13e79e9cfe84\": container with ID starting with 1a45e3150202b623290582f2038408c30b6380a28ba980e4c46a13e79e9cfe84 not found: ID does not exist"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.544541 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-0"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.642486 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"ee3a4967-773c-4106-955e-ce3823c96169","Type":"ContainerDied","Data":"ab860b5f9af393d4d563cdd424c16d5a1108d135096f6503aa4ffc4004fed4df"}
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.642542 4687 scope.go:117] "RemoveContainer" containerID="55e095bf402d4decfeb0d7eab9463616f714666ced8929276007bd2c6f82ed79"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.642812 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-0"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.666864 4687 scope.go:117] "RemoveContainer" containerID="9e58ea79ce0d44062c43211417875f7802750794ea39a9d102294de4bf3d6c6c"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.701865 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-kolla-config\") pod \"ee3a4967-773c-4106-955e-ce3823c96169\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") "
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.702068 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee3a4967-773c-4106-955e-ce3823c96169-config-data-generated\") pod \"ee3a4967-773c-4106-955e-ce3823c96169\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") "
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.702164 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfsj6\" (UniqueName: \"kubernetes.io/projected/ee3a4967-773c-4106-955e-ce3823c96169-kube-api-access-rfsj6\") pod \"ee3a4967-773c-4106-955e-ce3823c96169\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") "
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.702263 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-operator-scripts\") pod \"ee3a4967-773c-4106-955e-ce3823c96169\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") "
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.702322 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-config-data-default\") pod \"ee3a4967-773c-4106-955e-ce3823c96169\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") "
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.703160 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ee3a4967-773c-4106-955e-ce3823c96169\" (UID: \"ee3a4967-773c-4106-955e-ce3823c96169\") "
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.702512 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee3a4967-773c-4106-955e-ce3823c96169-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "ee3a4967-773c-4106-955e-ce3823c96169" (UID: "ee3a4967-773c-4106-955e-ce3823c96169"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.702972 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "ee3a4967-773c-4106-955e-ce3823c96169" (UID: "ee3a4967-773c-4106-955e-ce3823c96169"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.703042 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "ee3a4967-773c-4106-955e-ce3823c96169" (UID: "ee3a4967-773c-4106-955e-ce3823c96169"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.703087 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee3a4967-773c-4106-955e-ce3823c96169" (UID: "ee3a4967-773c-4106-955e-ce3823c96169"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.704002 4687 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-kolla-config\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.704026 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee3a4967-773c-4106-955e-ce3823c96169-config-data-generated\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.704039 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.704050 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee3a4967-773c-4106-955e-ce3823c96169-config-data-default\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.707699 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee3a4967-773c-4106-955e-ce3823c96169-kube-api-access-rfsj6" (OuterVolumeSpecName: "kube-api-access-rfsj6") pod "ee3a4967-773c-4106-955e-ce3823c96169" (UID: "ee3a4967-773c-4106-955e-ce3823c96169"). InnerVolumeSpecName "kube-api-access-rfsj6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.713672 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "ee3a4967-773c-4106-955e-ce3823c96169" (UID: "ee3a4967-773c-4106-955e-ce3823c96169"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.805505 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfsj6\" (UniqueName: \"kubernetes.io/projected/ee3a4967-773c-4106-955e-ce3823c96169-kube-api-access-rfsj6\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.805572 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.823238 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.906655 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.972026 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"]
Jan 31 07:15:52 crc kubenswrapper[4687]: I0131 07:15:52.981612 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"]
Jan 31 07:15:53 crc kubenswrapper[4687]: I0131 07:15:53.586685 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft"]
Jan 31 07:15:53 crc kubenswrapper[4687]: I0131 07:15:53.587181 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" podUID="b6d9f75a-cf24-43c6-bfec-f21ea9edb53c" containerName="manager" containerID="cri-o://aca9e422316c52020d83135248796b0e744514f24e8dec495442872d96cfabc7" gracePeriod=10
Jan 31 07:15:53 crc kubenswrapper[4687]: I0131 07:15:53.612223 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e0aeef7-ccda-496c-ba2b-ca020077baf2" path="/var/lib/kubelet/pods/0e0aeef7-ccda-496c-ba2b-ca020077baf2/volumes"
Jan 31 07:15:53 crc kubenswrapper[4687]: I0131 07:15:53.613029 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f24838-c89e-4787-bd07-80871dd0bece" path="/var/lib/kubelet/pods/40f24838-c89e-4787-bd07-80871dd0bece/volumes"
Jan 31 07:15:53 crc kubenswrapper[4687]: I0131 07:15:53.613574 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e229e979-1176-4e84-9dab-1027aee52b34" path="/var/lib/kubelet/pods/e229e979-1176-4e84-9dab-1027aee52b34/volumes"
Jan 31 07:15:53 crc kubenswrapper[4687]: I0131 07:15:53.614850 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eab13481-b0e4-40a4-8541-7738638251a9" path="/var/lib/kubelet/pods/eab13481-b0e4-40a4-8541-7738638251a9/volumes"
Jan 31 07:15:53 crc kubenswrapper[4687]: I0131 07:15:53.615449 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee3a4967-773c-4106-955e-ce3823c96169" path="/var/lib/kubelet/pods/ee3a4967-773c-4106-955e-ce3823c96169/volumes"
Jan 31 07:15:53 crc kubenswrapper[4687]: I0131 07:15:53.789101 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-l54rp"]
Jan 31 07:15:53 crc kubenswrapper[4687]: I0131 07:15:53.789353 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-index-l54rp" podUID="18442ead-5a1c-4a1c-bb4d-fddf9434b284" containerName="registry-server" containerID="cri-o://419551a9e51d5ca62cca8d69760a7dd6433ed8518d462cad59f3dd7424c49259" gracePeriod=30
Jan 31 07:15:53 crc kubenswrapper[4687]: I0131 07:15:53.834535 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v"]
Jan 31 07:15:53 crc kubenswrapper[4687]: I0131 07:15:53.847488 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efflj5v"]
Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.107132 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft"
Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.224551 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv2hm\" (UniqueName: \"kubernetes.io/projected/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-kube-api-access-hv2hm\") pod \"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c\" (UID: \"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c\") "
Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.224654 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-webhook-cert\") pod \"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c\" (UID: \"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c\") "
Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.224748 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-apiservice-cert\") pod \"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c\" (UID: \"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c\") "
Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.231164 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-kube-api-access-hv2hm" (OuterVolumeSpecName: "kube-api-access-hv2hm") pod "b6d9f75a-cf24-43c6-bfec-f21ea9edb53c" (UID: "b6d9f75a-cf24-43c6-bfec-f21ea9edb53c"). InnerVolumeSpecName "kube-api-access-hv2hm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.232436 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "b6d9f75a-cf24-43c6-bfec-f21ea9edb53c" (UID: "b6d9f75a-cf24-43c6-bfec-f21ea9edb53c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.232647 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b6d9f75a-cf24-43c6-bfec-f21ea9edb53c" (UID: "b6d9f75a-cf24-43c6-bfec-f21ea9edb53c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.277652 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-l54rp"
Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.326343 4687 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.326389 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv2hm\" (UniqueName: \"kubernetes.io/projected/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-kube-api-access-hv2hm\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.326402 4687 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.427596 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd42f\" (UniqueName: \"kubernetes.io/projected/18442ead-5a1c-4a1c-bb4d-fddf9434b284-kube-api-access-dd42f\") pod \"18442ead-5a1c-4a1c-bb4d-fddf9434b284\" (UID: \"18442ead-5a1c-4a1c-bb4d-fddf9434b284\") "
Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.431134 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18442ead-5a1c-4a1c-bb4d-fddf9434b284-kube-api-access-dd42f" (OuterVolumeSpecName: "kube-api-access-dd42f") pod "18442ead-5a1c-4a1c-bb4d-fddf9434b284" (UID: "18442ead-5a1c-4a1c-bb4d-fddf9434b284"). InnerVolumeSpecName "kube-api-access-dd42f".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.528921 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd42f\" (UniqueName: \"kubernetes.io/projected/18442ead-5a1c-4a1c-bb4d-fddf9434b284-kube-api-access-dd42f\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.663856 4687 generic.go:334] "Generic (PLEG): container finished" podID="18442ead-5a1c-4a1c-bb4d-fddf9434b284" containerID="419551a9e51d5ca62cca8d69760a7dd6433ed8518d462cad59f3dd7424c49259" exitCode=0 Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.663894 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-l54rp" Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.663938 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-l54rp" event={"ID":"18442ead-5a1c-4a1c-bb4d-fddf9434b284","Type":"ContainerDied","Data":"419551a9e51d5ca62cca8d69760a7dd6433ed8518d462cad59f3dd7424c49259"} Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.663974 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-l54rp" event={"ID":"18442ead-5a1c-4a1c-bb4d-fddf9434b284","Type":"ContainerDied","Data":"3d517b5368af819299c6ca7c3bbbc68690869d0eb5807688740c8f3d5794c18f"} Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.663992 4687 scope.go:117] "RemoveContainer" containerID="419551a9e51d5ca62cca8d69760a7dd6433ed8518d462cad59f3dd7424c49259" Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.668176 4687 generic.go:334] "Generic (PLEG): container finished" podID="b6d9f75a-cf24-43c6-bfec-f21ea9edb53c" containerID="aca9e422316c52020d83135248796b0e744514f24e8dec495442872d96cfabc7" exitCode=0 Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.668221 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" event={"ID":"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c","Type":"ContainerDied","Data":"aca9e422316c52020d83135248796b0e744514f24e8dec495442872d96cfabc7"} Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.668277 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" event={"ID":"b6d9f75a-cf24-43c6-bfec-f21ea9edb53c","Type":"ContainerDied","Data":"14456a6935d0160108ffd66d5e60559fb57045bd9da663dabdbc39f5c8056c0d"} Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.668281 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft" Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.686534 4687 scope.go:117] "RemoveContainer" containerID="419551a9e51d5ca62cca8d69760a7dd6433ed8518d462cad59f3dd7424c49259" Jan 31 07:15:54 crc kubenswrapper[4687]: E0131 07:15:54.686922 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"419551a9e51d5ca62cca8d69760a7dd6433ed8518d462cad59f3dd7424c49259\": container with ID starting with 419551a9e51d5ca62cca8d69760a7dd6433ed8518d462cad59f3dd7424c49259 not found: ID does not exist" containerID="419551a9e51d5ca62cca8d69760a7dd6433ed8518d462cad59f3dd7424c49259" Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.686965 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"419551a9e51d5ca62cca8d69760a7dd6433ed8518d462cad59f3dd7424c49259"} err="failed to get container status \"419551a9e51d5ca62cca8d69760a7dd6433ed8518d462cad59f3dd7424c49259\": rpc error: code = NotFound desc = could not find container \"419551a9e51d5ca62cca8d69760a7dd6433ed8518d462cad59f3dd7424c49259\": container with ID starting with 
419551a9e51d5ca62cca8d69760a7dd6433ed8518d462cad59f3dd7424c49259 not found: ID does not exist" Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.686989 4687 scope.go:117] "RemoveContainer" containerID="aca9e422316c52020d83135248796b0e744514f24e8dec495442872d96cfabc7" Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.697793 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-l54rp"] Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.703300 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-index-l54rp"] Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.710903 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft"] Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.714967 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-cf47c99bb-vb9ft"] Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.722881 4687 scope.go:117] "RemoveContainer" containerID="aca9e422316c52020d83135248796b0e744514f24e8dec495442872d96cfabc7" Jan 31 07:15:54 crc kubenswrapper[4687]: E0131 07:15:54.723535 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aca9e422316c52020d83135248796b0e744514f24e8dec495442872d96cfabc7\": container with ID starting with aca9e422316c52020d83135248796b0e744514f24e8dec495442872d96cfabc7 not found: ID does not exist" containerID="aca9e422316c52020d83135248796b0e744514f24e8dec495442872d96cfabc7" Jan 31 07:15:54 crc kubenswrapper[4687]: I0131 07:15:54.723593 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aca9e422316c52020d83135248796b0e744514f24e8dec495442872d96cfabc7"} err="failed to get container status 
\"aca9e422316c52020d83135248796b0e744514f24e8dec495442872d96cfabc7\": rpc error: code = NotFound desc = could not find container \"aca9e422316c52020d83135248796b0e744514f24e8dec495442872d96cfabc7\": container with ID starting with aca9e422316c52020d83135248796b0e744514f24e8dec495442872d96cfabc7 not found: ID does not exist" Jan 31 07:15:55 crc kubenswrapper[4687]: I0131 07:15:55.612992 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18442ead-5a1c-4a1c-bb4d-fddf9434b284" path="/var/lib/kubelet/pods/18442ead-5a1c-4a1c-bb4d-fddf9434b284/volumes" Jan 31 07:15:55 crc kubenswrapper[4687]: I0131 07:15:55.613964 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e5d0709-195f-4511-897c-0dd7d15b5275" path="/var/lib/kubelet/pods/4e5d0709-195f-4511-897c-0dd7d15b5275/volumes" Jan 31 07:15:55 crc kubenswrapper[4687]: I0131 07:15:55.614780 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6d9f75a-cf24-43c6-bfec-f21ea9edb53c" path="/var/lib/kubelet/pods/b6d9f75a-cf24-43c6-bfec-f21ea9edb53c/volumes" Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.060051 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh"] Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.060607 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh" podUID="160706b4-005d-446d-a925-3849ab49f621" containerName="operator" containerID="cri-o://64a6d598d1ed865dd2e3d259d1aaead2b412820513214f29912ad30bdf636cc3" gracePeriod=10 Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.383633 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-vm9f7"] Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.383910 4687 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" podUID="a46e651b-24d0-42a5-8b48-06a4d92da4ba" containerName="registry-server" containerID="cri-o://a4483730da4be1a3e88a3bbcfedc40262c125684356d31e04b335d704ff66a23" gracePeriod=30 Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.432191 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk"] Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.442866 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590qtfvk"] Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.591098 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh" Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.692806 4687 generic.go:334] "Generic (PLEG): container finished" podID="a46e651b-24d0-42a5-8b48-06a4d92da4ba" containerID="a4483730da4be1a3e88a3bbcfedc40262c125684356d31e04b335d704ff66a23" exitCode=0 Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.692906 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" event={"ID":"a46e651b-24d0-42a5-8b48-06a4d92da4ba","Type":"ContainerDied","Data":"a4483730da4be1a3e88a3bbcfedc40262c125684356d31e04b335d704ff66a23"} Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.694536 4687 generic.go:334] "Generic (PLEG): container finished" podID="160706b4-005d-446d-a925-3849ab49f621" containerID="64a6d598d1ed865dd2e3d259d1aaead2b412820513214f29912ad30bdf636cc3" exitCode=0 Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.694562 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh" 
event={"ID":"160706b4-005d-446d-a925-3849ab49f621","Type":"ContainerDied","Data":"64a6d598d1ed865dd2e3d259d1aaead2b412820513214f29912ad30bdf636cc3"} Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.694578 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh" event={"ID":"160706b4-005d-446d-a925-3849ab49f621","Type":"ContainerDied","Data":"35c73975af7c5becfe2593054c247b3f532741ffa35c59349c0d238764b25ffa"} Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.694596 4687 scope.go:117] "RemoveContainer" containerID="64a6d598d1ed865dd2e3d259d1aaead2b412820513214f29912ad30bdf636cc3" Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.694686 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh" Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.734380 4687 scope.go:117] "RemoveContainer" containerID="64a6d598d1ed865dd2e3d259d1aaead2b412820513214f29912ad30bdf636cc3" Jan 31 07:15:56 crc kubenswrapper[4687]: E0131 07:15:56.736038 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64a6d598d1ed865dd2e3d259d1aaead2b412820513214f29912ad30bdf636cc3\": container with ID starting with 64a6d598d1ed865dd2e3d259d1aaead2b412820513214f29912ad30bdf636cc3 not found: ID does not exist" containerID="64a6d598d1ed865dd2e3d259d1aaead2b412820513214f29912ad30bdf636cc3" Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.736101 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a6d598d1ed865dd2e3d259d1aaead2b412820513214f29912ad30bdf636cc3"} err="failed to get container status \"64a6d598d1ed865dd2e3d259d1aaead2b412820513214f29912ad30bdf636cc3\": rpc error: code = NotFound desc = could not find container \"64a6d598d1ed865dd2e3d259d1aaead2b412820513214f29912ad30bdf636cc3\": container 
with ID starting with 64a6d598d1ed865dd2e3d259d1aaead2b412820513214f29912ad30bdf636cc3 not found: ID does not exist" Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.756330 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsn6j\" (UniqueName: \"kubernetes.io/projected/160706b4-005d-446d-a925-3849ab49f621-kube-api-access-qsn6j\") pod \"160706b4-005d-446d-a925-3849ab49f621\" (UID: \"160706b4-005d-446d-a925-3849ab49f621\") " Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.778726 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/160706b4-005d-446d-a925-3849ab49f621-kube-api-access-qsn6j" (OuterVolumeSpecName: "kube-api-access-qsn6j") pod "160706b4-005d-446d-a925-3849ab49f621" (UID: "160706b4-005d-446d-a925-3849ab49f621"). InnerVolumeSpecName "kube-api-access-qsn6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.849528 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.858348 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsn6j\" (UniqueName: \"kubernetes.io/projected/160706b4-005d-446d-a925-3849ab49f621-kube-api-access-qsn6j\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.959837 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjp5l\" (UniqueName: \"kubernetes.io/projected/a46e651b-24d0-42a5-8b48-06a4d92da4ba-kube-api-access-xjp5l\") pod \"a46e651b-24d0-42a5-8b48-06a4d92da4ba\" (UID: \"a46e651b-24d0-42a5-8b48-06a4d92da4ba\") " Jan 31 07:15:56 crc kubenswrapper[4687]: I0131 07:15:56.962911 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a46e651b-24d0-42a5-8b48-06a4d92da4ba-kube-api-access-xjp5l" (OuterVolumeSpecName: "kube-api-access-xjp5l") pod "a46e651b-24d0-42a5-8b48-06a4d92da4ba" (UID: "a46e651b-24d0-42a5-8b48-06a4d92da4ba"). InnerVolumeSpecName "kube-api-access-xjp5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:57 crc kubenswrapper[4687]: I0131 07:15:57.022647 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh"] Jan 31 07:15:57 crc kubenswrapper[4687]: I0131 07:15:57.042448 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-n25lh"] Jan 31 07:15:57 crc kubenswrapper[4687]: I0131 07:15:57.061573 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjp5l\" (UniqueName: \"kubernetes.io/projected/a46e651b-24d0-42a5-8b48-06a4d92da4ba-kube-api-access-xjp5l\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:57 crc kubenswrapper[4687]: I0131 07:15:57.612676 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="160706b4-005d-446d-a925-3849ab49f621" path="/var/lib/kubelet/pods/160706b4-005d-446d-a925-3849ab49f621/volumes" Jan 31 07:15:57 crc kubenswrapper[4687]: I0131 07:15:57.613334 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7" path="/var/lib/kubelet/pods/cf47f9f6-c1ba-43ec-be66-a9aa4ca4afc7/volumes" Jan 31 07:15:57 crc kubenswrapper[4687]: I0131 07:15:57.704109 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" event={"ID":"a46e651b-24d0-42a5-8b48-06a4d92da4ba","Type":"ContainerDied","Data":"8af2ad8b871f4bc94135054c239c090b59a6905ef5ed49395da95de323cca6da"} Jan 31 07:15:57 crc kubenswrapper[4687]: I0131 07:15:57.704158 4687 scope.go:117] "RemoveContainer" containerID="a4483730da4be1a3e88a3bbcfedc40262c125684356d31e04b335d704ff66a23" Jan 31 07:15:57 crc kubenswrapper[4687]: I0131 07:15:57.704243 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-vm9f7" Jan 31 07:15:57 crc kubenswrapper[4687]: I0131 07:15:57.726720 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-vm9f7"] Jan 31 07:15:57 crc kubenswrapper[4687]: I0131 07:15:57.731748 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-vm9f7"] Jan 31 07:15:58 crc kubenswrapper[4687]: I0131 07:15:58.663334 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf"] Jan 31 07:15:58 crc kubenswrapper[4687]: I0131 07:15:58.663856 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" podUID="d8461d3e-8187-48d8-bdc5-1f97545dc6d5" containerName="manager" containerID="cri-o://0b81b08f07ce40d2b31e152d24ea6f74dd6a8172516ca49b9692aed081c6ad58" gracePeriod=10 Jan 31 07:15:58 crc kubenswrapper[4687]: I0131 07:15:58.684215 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:15:58 crc kubenswrapper[4687]: I0131 07:15:58.684284 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:15:58 crc kubenswrapper[4687]: I0131 07:15:58.815891 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-6cpr7"] Jan 31 07:15:58 crc kubenswrapper[4687]: 
I0131 07:15:58.816235 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-6cpr7" podUID="f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c" containerName="registry-server" containerID="cri-o://33f17f80d53674c3e3aad38efd24ddac5dfcf4f4e60a1d1c6e7dee1544f9cfd9" gracePeriod=30 Jan 31 07:15:58 crc kubenswrapper[4687]: I0131 07:15:58.874057 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6"] Jan 31 07:15:58 crc kubenswrapper[4687]: I0131 07:15:58.878835 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576fsjw6"] Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.231295 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-6cpr7" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.392897 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pccqz\" (UniqueName: \"kubernetes.io/projected/f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c-kube-api-access-pccqz\") pod \"f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c\" (UID: \"f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c\") " Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.401890 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c-kube-api-access-pccqz" (OuterVolumeSpecName: "kube-api-access-pccqz") pod "f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c" (UID: "f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c"). InnerVolumeSpecName "kube-api-access-pccqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.494379 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pccqz\" (UniqueName: \"kubernetes.io/projected/f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c-kube-api-access-pccqz\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.575871 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.612165 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="438b0249-f9e2-4627-91ae-313342bdd172" path="/var/lib/kubelet/pods/438b0249-f9e2-4627-91ae-313342bdd172/volumes" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.613133 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a46e651b-24d0-42a5-8b48-06a4d92da4ba" path="/var/lib/kubelet/pods/a46e651b-24d0-42a5-8b48-06a4d92da4ba/volumes" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.696576 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgjt5\" (UniqueName: \"kubernetes.io/projected/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-kube-api-access-zgjt5\") pod \"d8461d3e-8187-48d8-bdc5-1f97545dc6d5\" (UID: \"d8461d3e-8187-48d8-bdc5-1f97545dc6d5\") " Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.696737 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-apiservice-cert\") pod \"d8461d3e-8187-48d8-bdc5-1f97545dc6d5\" (UID: \"d8461d3e-8187-48d8-bdc5-1f97545dc6d5\") " Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.696837 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-webhook-cert\") pod \"d8461d3e-8187-48d8-bdc5-1f97545dc6d5\" (UID: \"d8461d3e-8187-48d8-bdc5-1f97545dc6d5\") " Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.700090 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "d8461d3e-8187-48d8-bdc5-1f97545dc6d5" (UID: "d8461d3e-8187-48d8-bdc5-1f97545dc6d5"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.700197 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d8461d3e-8187-48d8-bdc5-1f97545dc6d5" (UID: "d8461d3e-8187-48d8-bdc5-1f97545dc6d5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.700200 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-kube-api-access-zgjt5" (OuterVolumeSpecName: "kube-api-access-zgjt5") pod "d8461d3e-8187-48d8-bdc5-1f97545dc6d5" (UID: "d8461d3e-8187-48d8-bdc5-1f97545dc6d5"). InnerVolumeSpecName "kube-api-access-zgjt5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.725676 4687 generic.go:334] "Generic (PLEG): container finished" podID="d8461d3e-8187-48d8-bdc5-1f97545dc6d5" containerID="0b81b08f07ce40d2b31e152d24ea6f74dd6a8172516ca49b9692aed081c6ad58" exitCode=0 Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.725720 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" event={"ID":"d8461d3e-8187-48d8-bdc5-1f97545dc6d5","Type":"ContainerDied","Data":"0b81b08f07ce40d2b31e152d24ea6f74dd6a8172516ca49b9692aed081c6ad58"} Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.725805 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" event={"ID":"d8461d3e-8187-48d8-bdc5-1f97545dc6d5","Type":"ContainerDied","Data":"01e91627b9caab8674b9509c0c3754e569cac6f1765c19f055fc9a60feb103ad"} Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.725829 4687 scope.go:117] "RemoveContainer" containerID="0b81b08f07ce40d2b31e152d24ea6f74dd6a8172516ca49b9692aed081c6ad58" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.725741 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.728317 4687 generic.go:334] "Generic (PLEG): container finished" podID="f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c" containerID="33f17f80d53674c3e3aad38efd24ddac5dfcf4f4e60a1d1c6e7dee1544f9cfd9" exitCode=0 Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.728360 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-6cpr7" event={"ID":"f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c","Type":"ContainerDied","Data":"33f17f80d53674c3e3aad38efd24ddac5dfcf4f4e60a1d1c6e7dee1544f9cfd9"} Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.728395 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-6cpr7" event={"ID":"f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c","Type":"ContainerDied","Data":"2a6843d6686c923b2dbcc076461f24cf7176f8eca079df83e16a0fced74fac6c"} Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.728472 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-6cpr7" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.747318 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-6cpr7"] Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.753841 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-6cpr7"] Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.754366 4687 scope.go:117] "RemoveContainer" containerID="0b81b08f07ce40d2b31e152d24ea6f74dd6a8172516ca49b9692aed081c6ad58" Jan 31 07:15:59 crc kubenswrapper[4687]: E0131 07:15:59.754975 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b81b08f07ce40d2b31e152d24ea6f74dd6a8172516ca49b9692aed081c6ad58\": container with ID starting with 0b81b08f07ce40d2b31e152d24ea6f74dd6a8172516ca49b9692aed081c6ad58 not found: ID does not exist" containerID="0b81b08f07ce40d2b31e152d24ea6f74dd6a8172516ca49b9692aed081c6ad58" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.755072 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b81b08f07ce40d2b31e152d24ea6f74dd6a8172516ca49b9692aed081c6ad58"} err="failed to get container status \"0b81b08f07ce40d2b31e152d24ea6f74dd6a8172516ca49b9692aed081c6ad58\": rpc error: code = NotFound desc = could not find container \"0b81b08f07ce40d2b31e152d24ea6f74dd6a8172516ca49b9692aed081c6ad58\": container with ID starting with 0b81b08f07ce40d2b31e152d24ea6f74dd6a8172516ca49b9692aed081c6ad58 not found: ID does not exist" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.755149 4687 scope.go:117] "RemoveContainer" containerID="33f17f80d53674c3e3aad38efd24ddac5dfcf4f4e60a1d1c6e7dee1544f9cfd9" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.766261 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf"] Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.775450 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-controller-manager-64596d49b-mdfmf"] Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.781494 4687 scope.go:117] "RemoveContainer" containerID="33f17f80d53674c3e3aad38efd24ddac5dfcf4f4e60a1d1c6e7dee1544f9cfd9" Jan 31 07:15:59 crc kubenswrapper[4687]: E0131 07:15:59.782026 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33f17f80d53674c3e3aad38efd24ddac5dfcf4f4e60a1d1c6e7dee1544f9cfd9\": container with ID starting with 33f17f80d53674c3e3aad38efd24ddac5dfcf4f4e60a1d1c6e7dee1544f9cfd9 not found: ID does not exist" containerID="33f17f80d53674c3e3aad38efd24ddac5dfcf4f4e60a1d1c6e7dee1544f9cfd9" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.782070 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f17f80d53674c3e3aad38efd24ddac5dfcf4f4e60a1d1c6e7dee1544f9cfd9"} err="failed to get container status \"33f17f80d53674c3e3aad38efd24ddac5dfcf4f4e60a1d1c6e7dee1544f9cfd9\": rpc error: code = NotFound desc = could not find container \"33f17f80d53674c3e3aad38efd24ddac5dfcf4f4e60a1d1c6e7dee1544f9cfd9\": container with ID starting with 33f17f80d53674c3e3aad38efd24ddac5dfcf4f4e60a1d1c6e7dee1544f9cfd9 not found: ID does not exist" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.798702 4687 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.798742 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgjt5\" (UniqueName: 
\"kubernetes.io/projected/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-kube-api-access-zgjt5\") on node \"crc\" DevicePath \"\"" Jan 31 07:15:59 crc kubenswrapper[4687]: I0131 07:15:59.798760 4687 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8461d3e-8187-48d8-bdc5-1f97545dc6d5-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.077013 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv"] Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.077257 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" podUID="2f7bf014-81af-465e-a08f-f9a1dc8a7383" containerName="manager" containerID="cri-o://149a0a77f01ed0d6fcc35f7c597a832936ffd194ba3f6c3b03b53d87e5cd377f" gracePeriod=10 Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.303714 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-7rd2t"] Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.303931 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-7rd2t" podUID="b26b5ca8-6e8a-41f4-bf71-822aef1f73bf" containerName="registry-server" containerID="cri-o://fe2500c2359c71955ed432824aab11abd2f10c5850ec93e1a9ec75d5c04b517e" gracePeriod=30 Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.321265 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm"] Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.327704 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vmqnm"] Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 
07:16:00.539663 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.715669 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgb5p\" (UniqueName: \"kubernetes.io/projected/2f7bf014-81af-465e-a08f-f9a1dc8a7383-kube-api-access-hgb5p\") pod \"2f7bf014-81af-465e-a08f-f9a1dc8a7383\" (UID: \"2f7bf014-81af-465e-a08f-f9a1dc8a7383\") " Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.715732 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f7bf014-81af-465e-a08f-f9a1dc8a7383-apiservice-cert\") pod \"2f7bf014-81af-465e-a08f-f9a1dc8a7383\" (UID: \"2f7bf014-81af-465e-a08f-f9a1dc8a7383\") " Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.715776 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f7bf014-81af-465e-a08f-f9a1dc8a7383-webhook-cert\") pod \"2f7bf014-81af-465e-a08f-f9a1dc8a7383\" (UID: \"2f7bf014-81af-465e-a08f-f9a1dc8a7383\") " Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.721564 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7bf014-81af-465e-a08f-f9a1dc8a7383-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "2f7bf014-81af-465e-a08f-f9a1dc8a7383" (UID: "2f7bf014-81af-465e-a08f-f9a1dc8a7383"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.721711 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f7bf014-81af-465e-a08f-f9a1dc8a7383-kube-api-access-hgb5p" (OuterVolumeSpecName: "kube-api-access-hgb5p") pod "2f7bf014-81af-465e-a08f-f9a1dc8a7383" (UID: "2f7bf014-81af-465e-a08f-f9a1dc8a7383"). InnerVolumeSpecName "kube-api-access-hgb5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.722256 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-7rd2t" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.723335 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7bf014-81af-465e-a08f-f9a1dc8a7383-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2f7bf014-81af-465e-a08f-f9a1dc8a7383" (UID: "2f7bf014-81af-465e-a08f-f9a1dc8a7383"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.740286 4687 generic.go:334] "Generic (PLEG): container finished" podID="2f7bf014-81af-465e-a08f-f9a1dc8a7383" containerID="149a0a77f01ed0d6fcc35f7c597a832936ffd194ba3f6c3b03b53d87e5cd377f" exitCode=0 Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.740469 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.740762 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" event={"ID":"2f7bf014-81af-465e-a08f-f9a1dc8a7383","Type":"ContainerDied","Data":"149a0a77f01ed0d6fcc35f7c597a832936ffd194ba3f6c3b03b53d87e5cd377f"} Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.740868 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv" event={"ID":"2f7bf014-81af-465e-a08f-f9a1dc8a7383","Type":"ContainerDied","Data":"d2f8a108c7c4ebab9518c9a6c9bd5820050542bd8e340623cdf100ccabb29418"} Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.740929 4687 scope.go:117] "RemoveContainer" containerID="149a0a77f01ed0d6fcc35f7c597a832936ffd194ba3f6c3b03b53d87e5cd377f" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.764049 4687 generic.go:334] "Generic (PLEG): container finished" podID="b26b5ca8-6e8a-41f4-bf71-822aef1f73bf" containerID="fe2500c2359c71955ed432824aab11abd2f10c5850ec93e1a9ec75d5c04b517e" exitCode=0 Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.764094 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-7rd2t" event={"ID":"b26b5ca8-6e8a-41f4-bf71-822aef1f73bf","Type":"ContainerDied","Data":"fe2500c2359c71955ed432824aab11abd2f10c5850ec93e1a9ec75d5c04b517e"} Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.764123 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-7rd2t" event={"ID":"b26b5ca8-6e8a-41f4-bf71-822aef1f73bf","Type":"ContainerDied","Data":"61984704ec68fd83e876e8eece6f46f6fd73dc003d27bfa0e590c31c4eecdc62"} Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.764172 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-7rd2t" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.773724 4687 scope.go:117] "RemoveContainer" containerID="149a0a77f01ed0d6fcc35f7c597a832936ffd194ba3f6c3b03b53d87e5cd377f" Jan 31 07:16:00 crc kubenswrapper[4687]: E0131 07:16:00.776419 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"149a0a77f01ed0d6fcc35f7c597a832936ffd194ba3f6c3b03b53d87e5cd377f\": container with ID starting with 149a0a77f01ed0d6fcc35f7c597a832936ffd194ba3f6c3b03b53d87e5cd377f not found: ID does not exist" containerID="149a0a77f01ed0d6fcc35f7c597a832936ffd194ba3f6c3b03b53d87e5cd377f" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.776478 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"149a0a77f01ed0d6fcc35f7c597a832936ffd194ba3f6c3b03b53d87e5cd377f"} err="failed to get container status \"149a0a77f01ed0d6fcc35f7c597a832936ffd194ba3f6c3b03b53d87e5cd377f\": rpc error: code = NotFound desc = could not find container \"149a0a77f01ed0d6fcc35f7c597a832936ffd194ba3f6c3b03b53d87e5cd377f\": container with ID starting with 149a0a77f01ed0d6fcc35f7c597a832936ffd194ba3f6c3b03b53d87e5cd377f not found: ID does not exist" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.776506 4687 scope.go:117] "RemoveContainer" containerID="fe2500c2359c71955ed432824aab11abd2f10c5850ec93e1a9ec75d5c04b517e" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.818136 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfsrx\" (UniqueName: \"kubernetes.io/projected/b26b5ca8-6e8a-41f4-bf71-822aef1f73bf-kube-api-access-tfsrx\") pod \"b26b5ca8-6e8a-41f4-bf71-822aef1f73bf\" (UID: \"b26b5ca8-6e8a-41f4-bf71-822aef1f73bf\") " Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.818340 4687 scope.go:117] "RemoveContainer" 
containerID="fe2500c2359c71955ed432824aab11abd2f10c5850ec93e1a9ec75d5c04b517e" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.818846 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgb5p\" (UniqueName: \"kubernetes.io/projected/2f7bf014-81af-465e-a08f-f9a1dc8a7383-kube-api-access-hgb5p\") on node \"crc\" DevicePath \"\"" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.818881 4687 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f7bf014-81af-465e-a08f-f9a1dc8a7383-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.818895 4687 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f7bf014-81af-465e-a08f-f9a1dc8a7383-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:16:00 crc kubenswrapper[4687]: E0131 07:16:00.818972 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe2500c2359c71955ed432824aab11abd2f10c5850ec93e1a9ec75d5c04b517e\": container with ID starting with fe2500c2359c71955ed432824aab11abd2f10c5850ec93e1a9ec75d5c04b517e not found: ID does not exist" containerID="fe2500c2359c71955ed432824aab11abd2f10c5850ec93e1a9ec75d5c04b517e" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.819021 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe2500c2359c71955ed432824aab11abd2f10c5850ec93e1a9ec75d5c04b517e"} err="failed to get container status \"fe2500c2359c71955ed432824aab11abd2f10c5850ec93e1a9ec75d5c04b517e\": rpc error: code = NotFound desc = could not find container \"fe2500c2359c71955ed432824aab11abd2f10c5850ec93e1a9ec75d5c04b517e\": container with ID starting with fe2500c2359c71955ed432824aab11abd2f10c5850ec93e1a9ec75d5c04b517e not found: ID does not exist" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 
07:16:00.823694 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26b5ca8-6e8a-41f4-bf71-822aef1f73bf-kube-api-access-tfsrx" (OuterVolumeSpecName: "kube-api-access-tfsrx") pod "b26b5ca8-6e8a-41f4-bf71-822aef1f73bf" (UID: "b26b5ca8-6e8a-41f4-bf71-822aef1f73bf"). InnerVolumeSpecName "kube-api-access-tfsrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.827959 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv"] Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.833796 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-8d596dc7f-pc8lv"] Jan 31 07:16:00 crc kubenswrapper[4687]: I0131 07:16:00.920803 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfsrx\" (UniqueName: \"kubernetes.io/projected/b26b5ca8-6e8a-41f4-bf71-822aef1f73bf-kube-api-access-tfsrx\") on node \"crc\" DevicePath \"\"" Jan 31 07:16:01 crc kubenswrapper[4687]: I0131 07:16:01.092572 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-7rd2t"] Jan 31 07:16:01 crc kubenswrapper[4687]: I0131 07:16:01.097085 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-7rd2t"] Jan 31 07:16:01 crc kubenswrapper[4687]: I0131 07:16:01.619345 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f7bf014-81af-465e-a08f-f9a1dc8a7383" path="/var/lib/kubelet/pods/2f7bf014-81af-465e-a08f-f9a1dc8a7383/volumes" Jan 31 07:16:01 crc kubenswrapper[4687]: I0131 07:16:01.619800 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b26b5ca8-6e8a-41f4-bf71-822aef1f73bf" path="/var/lib/kubelet/pods/b26b5ca8-6e8a-41f4-bf71-822aef1f73bf/volumes" Jan 31 07:16:01 crc kubenswrapper[4687]: I0131 
07:16:01.620255 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7e9c35-2d20-496b-bc0b-965d64cbd140" path="/var/lib/kubelet/pods/ce7e9c35-2d20-496b-bc0b-965d64cbd140/volumes" Jan 31 07:16:01 crc kubenswrapper[4687]: I0131 07:16:01.621370 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8461d3e-8187-48d8-bdc5-1f97545dc6d5" path="/var/lib/kubelet/pods/d8461d3e-8187-48d8-bdc5-1f97545dc6d5/volumes" Jan 31 07:16:01 crc kubenswrapper[4687]: I0131 07:16:01.621791 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c" path="/var/lib/kubelet/pods/f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c/volumes" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.348099 4687 scope.go:117] "RemoveContainer" containerID="a3fdf27497e89e3ded758842f01e975fbc68d28dac4c38c41f91d62c5d4bab96" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.367432 4687 scope.go:117] "RemoveContainer" containerID="62112938a0f6beba916d6eb94597064a766027b189ac6b309f2ff9091fa3d445" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.402271 4687 scope.go:117] "RemoveContainer" containerID="1ad6b47970d554bb8de23733521e8dc86ef8a4c06cccf8798956f3d26d565031" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.429604 4687 scope.go:117] "RemoveContainer" containerID="30413941ed0d998a434f2de78224017bf1e3e9c012db7f20228412a582b6b2be" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.447190 4687 scope.go:117] "RemoveContainer" containerID="65d71177a50b9a069065816151af961c9cd5dd25d44a03ab695c30380b1ae4f4" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.467959 4687 scope.go:117] "RemoveContainer" containerID="10ce8385a1c96b6fa17884b6b553750bff500d1b9ed3bde539703af5b29d9260" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.490817 4687 scope.go:117] "RemoveContainer" containerID="69fa6ca1f95a368a0edc97c59b35e4695761b243b8efc226918947e098854a57" Jan 31 07:16:07 crc kubenswrapper[4687]: 
I0131 07:16:07.513261 4687 scope.go:117] "RemoveContainer" containerID="3e581ba7c64f41139008f517bbfeea52a5527209dc56d4aeb78e6e3256a7e59f" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.551348 4687 scope.go:117] "RemoveContainer" containerID="7841fe3e7d07959a672a1a0f6799e03dd0adf7211d2563b1d278f8c19040034d" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.568474 4687 scope.go:117] "RemoveContainer" containerID="89329f80cc98acba809d1e66423207645be5bd9e81f2673e543f48e946e636d0" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.602908 4687 scope.go:117] "RemoveContainer" containerID="02048f9c545652d2634e3d612cc6ff23ac5893e7fbacce12160bba72ecc11c7b" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.619673 4687 scope.go:117] "RemoveContainer" containerID="9e6de3dca85b2c8d2b5004d500c0909275a4a8ed86e5d6c234667a84700b4556" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.635122 4687 scope.go:117] "RemoveContainer" containerID="86633f3ea008c8a5db815b52a02c61285b1779f25c9c1cca6ebd20c265f01ff9" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.652014 4687 scope.go:117] "RemoveContainer" containerID="c7bb073dc7f769dd63a4792ed024ae1b02144faef1b1ab6829129879b46af964" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.668294 4687 scope.go:117] "RemoveContainer" containerID="04b0c2eff28c24a85d20addbeb930b8bc419b0a38c1c266149441dafdb5ecbfa" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.683964 4687 scope.go:117] "RemoveContainer" containerID="4db508992b773dc1480fd79bf37f830fce67e47dfba2db6ff3e9ffc433880836" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.700308 4687 scope.go:117] "RemoveContainer" containerID="d625eec42cf19dd45cd0c28cc2ca9d21b9dbcd13f3cd3629aeb6dd37a654d22d" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.722621 4687 scope.go:117] "RemoveContainer" containerID="7ca01d48dbe92fb5a08f8e95f98f23cc491fb770dcba5ff32ffb86bf7778d0a3" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.737485 4687 
scope.go:117] "RemoveContainer" containerID="f3dac2f344ac8587ce79cc87e2f06f5b5cdba47a51ed5d45dee28cfa391fed31" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.751961 4687 scope.go:117] "RemoveContainer" containerID="9e70ab5a3efc6abd4f783aeeb7bb94ee1f9ec80dd3ab2d38c9f0b54ee56d021b" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.775886 4687 scope.go:117] "RemoveContainer" containerID="852a1aca08758c98de5c971f20ee29e97affdf43fcb33f46751b7551f0b07044" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.801620 4687 scope.go:117] "RemoveContainer" containerID="60d6618f95692b0b3804ab25bf0dfc4b23400823fad87c30a2ee78a94721869a" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.819844 4687 scope.go:117] "RemoveContainer" containerID="425544d1e116c741e09a69d5d1ebfcf1c1299fa94ee06c8ccaeb707c8a7ea626" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.835374 4687 scope.go:117] "RemoveContainer" containerID="99a0276c7b1ffbc131d0da854990fcb0a24f2905e77e23c70ea6a702b971e7b5" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.850724 4687 scope.go:117] "RemoveContainer" containerID="e3336cb60c46d96b899d59c9b6ce3d2a13ae7b11bfae8a5041e1cb251a81075c" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.904109 4687 scope.go:117] "RemoveContainer" containerID="884427cf10f24ae8fde8b7a03cb7c0e32b59f6c75ebf880e7417330619486825" Jan 31 07:16:07 crc kubenswrapper[4687]: I0131 07:16:07.922826 4687 scope.go:117] "RemoveContainer" containerID="e31853d1171bb667a8c8d62c8006125f8b1f7f1227d797a963b974c8980cc85c" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.693315 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-g2ng4/must-gather-twkvd"] Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.693916 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0aeef7-ccda-496c-ba2b-ca020077baf2" containerName="mysql-bootstrap" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.693928 4687 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0aeef7-ccda-496c-ba2b-ca020077baf2" containerName="mysql-bootstrap" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.693941 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be44d699-42c9-4e7f-a533-8b39328ceedd" containerName="keystone-api" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.693948 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="be44d699-42c9-4e7f-a533-8b39328ceedd" containerName="keystone-api" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.693958 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6787f12-c3f6-4611-b5b0-1b26155d4d41" containerName="manager" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.693963 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6787f12-c3f6-4611-b5b0-1b26155d4d41" containerName="manager" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.693972 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" containerName="mysql-bootstrap" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.693977 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" containerName="mysql-bootstrap" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.693984 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3a4967-773c-4106-955e-ce3823c96169" containerName="galera" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.693990 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3a4967-773c-4106-955e-ce3823c96169" containerName="galera" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.693996 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160706b4-005d-446d-a925-3849ab49f621" containerName="operator" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694002 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="160706b4-005d-446d-a925-3849ab49f621" containerName="operator" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694013 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26b5ca8-6e8a-41f4-bf71-822aef1f73bf" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694019 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26b5ca8-6e8a-41f4-bf71-822aef1f73bf" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694025 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3a4967-773c-4106-955e-ce3823c96169" containerName="mysql-bootstrap" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694031 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3a4967-773c-4106-955e-ce3823c96169" containerName="mysql-bootstrap" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694041 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8461d3e-8187-48d8-bdc5-1f97545dc6d5" containerName="manager" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694046 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8461d3e-8187-48d8-bdc5-1f97545dc6d5" containerName="manager" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694052 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18442ead-5a1c-4a1c-bb4d-fddf9434b284" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694058 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="18442ead-5a1c-4a1c-bb4d-fddf9434b284" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694067 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" containerName="galera" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694073 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" 
containerName="galera" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694083 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46e651b-24d0-42a5-8b48-06a4d92da4ba" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694088 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46e651b-24d0-42a5-8b48-06a4d92da4ba" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694097 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33674fdf-dc91-46fd-a4d5-795ff7fd4211" containerName="rabbitmq" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694102 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="33674fdf-dc91-46fd-a4d5-795ff7fd4211" containerName="rabbitmq" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694113 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0aeef7-ccda-496c-ba2b-ca020077baf2" containerName="galera" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694118 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0aeef7-ccda-496c-ba2b-ca020077baf2" containerName="galera" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694126 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f24838-c89e-4787-bd07-80871dd0bece" containerName="mariadb-account-delete" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694132 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f24838-c89e-4787-bd07-80871dd0bece" containerName="mariadb-account-delete" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694142 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33674fdf-dc91-46fd-a4d5-795ff7fd4211" containerName="setup-container" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694147 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="33674fdf-dc91-46fd-a4d5-795ff7fd4211" containerName="setup-container" Jan 31 
07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694156 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d9f75a-cf24-43c6-bfec-f21ea9edb53c" containerName="manager" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694161 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d9f75a-cf24-43c6-bfec-f21ea9edb53c" containerName="manager" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694169 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e229e979-1176-4e84-9dab-1027aee52b34" containerName="manager" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694174 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e229e979-1176-4e84-9dab-1027aee52b34" containerName="manager" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694184 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694190 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694199 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eab13481-b0e4-40a4-8541-7738638251a9" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694204 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="eab13481-b0e4-40a4-8541-7738638251a9" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694213 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7186f0a0-8f6a-465e-b18d-be6b3b28d1c8" containerName="memcached" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694218 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7186f0a0-8f6a-465e-b18d-be6b3b28d1c8" containerName="memcached" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694227 
4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d8e3aa-adce-49bd-8e29-a0adeea6009e" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694234 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d8e3aa-adce-49bd-8e29-a0adeea6009e" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694240 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7bf014-81af-465e-a08f-f9a1dc8a7383" containerName="manager" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694246 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7bf014-81af-465e-a08f-f9a1dc8a7383" containerName="manager" Jan 31 07:16:13 crc kubenswrapper[4687]: E0131 07:16:13.694253 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f24838-c89e-4787-bd07-80871dd0bece" containerName="mariadb-account-delete" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694259 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f24838-c89e-4787-bd07-80871dd0bece" containerName="mariadb-account-delete" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694347 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e0aeef7-ccda-496c-ba2b-ca020077baf2" containerName="galera" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694358 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="33674fdf-dc91-46fd-a4d5-795ff7fd4211" containerName="rabbitmq" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694364 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6d9f75a-cf24-43c6-bfec-f21ea9edb53c" containerName="manager" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694373 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6787f12-c3f6-4611-b5b0-1b26155d4d41" containerName="manager" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694380 4687 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="160706b4-005d-446d-a925-3849ab49f621" containerName="operator" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694388 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a46e651b-24d0-42a5-8b48-06a4d92da4ba" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694394 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="18442ead-5a1c-4a1c-bb4d-fddf9434b284" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694400 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26b5ca8-6e8a-41f4-bf71-822aef1f73bf" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694421 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="40f24838-c89e-4787-bd07-80871dd0bece" containerName="mariadb-account-delete" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694429 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc2b69a6-aae5-4e0c-8fc9-66a9f748b7b6" containerName="galera" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694437 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f677d5b2-044a-4fd2-9f4c-8d4c9dc6b23c" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694444 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8461d3e-8187-48d8-bdc5-1f97545dc6d5" containerName="manager" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694452 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="eab13481-b0e4-40a4-8541-7738638251a9" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694459 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e229e979-1176-4e84-9dab-1027aee52b34" containerName="manager" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694466 4687 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7186f0a0-8f6a-465e-b18d-be6b3b28d1c8" containerName="memcached" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694475 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3a4967-773c-4106-955e-ce3823c96169" containerName="galera" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694482 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="40f24838-c89e-4787-bd07-80871dd0bece" containerName="mariadb-account-delete" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694491 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f7bf014-81af-465e-a08f-f9a1dc8a7383" containerName="manager" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694498 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="be44d699-42c9-4e7f-a533-8b39328ceedd" containerName="keystone-api" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.694504 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="47d8e3aa-adce-49bd-8e29-a0adeea6009e" containerName="registry-server" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.695044 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g2ng4/must-gather-twkvd" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.697248 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-g2ng4"/"default-dockercfg-c24fx" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.698007 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-g2ng4"/"kube-root-ca.crt" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.698856 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-g2ng4"/"openshift-service-ca.crt" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.715854 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-g2ng4/must-gather-twkvd"] Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.785164 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gql5r\" (UniqueName: \"kubernetes.io/projected/b50a6b30-d3a3-4933-8bf2-f520e564d386-kube-api-access-gql5r\") pod \"must-gather-twkvd\" (UID: \"b50a6b30-d3a3-4933-8bf2-f520e564d386\") " pod="openshift-must-gather-g2ng4/must-gather-twkvd" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.785239 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b50a6b30-d3a3-4933-8bf2-f520e564d386-must-gather-output\") pod \"must-gather-twkvd\" (UID: \"b50a6b30-d3a3-4933-8bf2-f520e564d386\") " pod="openshift-must-gather-g2ng4/must-gather-twkvd" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.886214 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gql5r\" (UniqueName: \"kubernetes.io/projected/b50a6b30-d3a3-4933-8bf2-f520e564d386-kube-api-access-gql5r\") pod \"must-gather-twkvd\" (UID: \"b50a6b30-d3a3-4933-8bf2-f520e564d386\") " 
pod="openshift-must-gather-g2ng4/must-gather-twkvd" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.886274 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b50a6b30-d3a3-4933-8bf2-f520e564d386-must-gather-output\") pod \"must-gather-twkvd\" (UID: \"b50a6b30-d3a3-4933-8bf2-f520e564d386\") " pod="openshift-must-gather-g2ng4/must-gather-twkvd" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.887379 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b50a6b30-d3a3-4933-8bf2-f520e564d386-must-gather-output\") pod \"must-gather-twkvd\" (UID: \"b50a6b30-d3a3-4933-8bf2-f520e564d386\") " pod="openshift-must-gather-g2ng4/must-gather-twkvd" Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.906182 4687 generic.go:334] "Generic (PLEG): container finished" podID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerID="87120a710046f2e75116a16c4179bf49847f21569c6c405cde1ad7b2f9011407" exitCode=137 Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.906238 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerDied","Data":"87120a710046f2e75116a16c4179bf49847f21569c6c405cde1ad7b2f9011407"} Jan 31 07:16:13 crc kubenswrapper[4687]: I0131 07:16:13.927783 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gql5r\" (UniqueName: \"kubernetes.io/projected/b50a6b30-d3a3-4933-8bf2-f520e564d386-kube-api-access-gql5r\") pod \"must-gather-twkvd\" (UID: \"b50a6b30-d3a3-4933-8bf2-f520e564d386\") " pod="openshift-must-gather-g2ng4/must-gather-twkvd" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.011503 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g2ng4/must-gather-twkvd" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.169900 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.290756 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.290841 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5ckk\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-kube-api-access-s5ckk\") pod \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.290900 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-lock\") pod \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.290977 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift\") pod \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.291030 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-cache\") pod \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\" (UID: \"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0\") " Jan 31 07:16:14 crc 
kubenswrapper[4687]: I0131 07:16:14.292082 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-cache" (OuterVolumeSpecName: "cache") pod "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" (UID: "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.292099 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-lock" (OuterVolumeSpecName: "lock") pod "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" (UID: "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.295533 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" (UID: "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.295587 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "swift") pod "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" (UID: "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.295586 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-kube-api-access-s5ckk" (OuterVolumeSpecName: "kube-api-access-s5ckk") pod "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" (UID: "4f3169d5-4ca5-47e8-a6a4-b34705f30dd0"). InnerVolumeSpecName "kube-api-access-s5ckk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.392960 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.393025 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5ckk\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-kube-api-access-s5ckk\") on node \"crc\" DevicePath \"\"" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.393042 4687 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-lock\") on node \"crc\" DevicePath \"\"" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.393053 4687 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.393064 4687 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0-cache\") on node \"crc\" DevicePath \"\"" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.404844 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.447823 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-g2ng4/must-gather-twkvd"] Jan 31 07:16:14 crc kubenswrapper[4687]: W0131 07:16:14.453291 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb50a6b30_d3a3_4933_8bf2_f520e564d386.slice/crio-1880a4b8bd5c1ca1010d9b94b543d26aa7ab6f247a60f29e9fdcdec8e9c3e563 WatchSource:0}: Error finding container 1880a4b8bd5c1ca1010d9b94b543d26aa7ab6f247a60f29e9fdcdec8e9c3e563: Status 404 returned error can't find the container with id 1880a4b8bd5c1ca1010d9b94b543d26aa7ab6f247a60f29e9fdcdec8e9c3e563 Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.494704 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.911797 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g2ng4/must-gather-twkvd" event={"ID":"b50a6b30-d3a3-4933-8bf2-f520e564d386","Type":"ContainerStarted","Data":"1880a4b8bd5c1ca1010d9b94b543d26aa7ab6f247a60f29e9fdcdec8e9c3e563"} Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.918216 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"4f3169d5-4ca5-47e8-a6a4-b34705f30dd0","Type":"ContainerDied","Data":"6b55cba56e12adbea9787c4e6c7f8b2a1f18b60750f0b59439ea298fede50957"} Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.918270 4687 scope.go:117] "RemoveContainer" containerID="87120a710046f2e75116a16c4179bf49847f21569c6c405cde1ad7b2f9011407" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.918294 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-storage-0" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.935639 4687 scope.go:117] "RemoveContainer" containerID="3769f301e625ab3cce3a06cc29e9d5f5bb2ae84bd6b08ca2cb7bb3f7aabb6511" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.950516 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.951189 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.958176 4687 scope.go:117] "RemoveContainer" containerID="087709f07a16a8956cad97cec775636bfa983adaa6627cebd8289db5e77fc582" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.970950 4687 scope.go:117] "RemoveContainer" containerID="462d03384382a6f3fb4523829751723bfeacf1bcf107bf6627d59de69d3cc69c" Jan 31 07:16:14 crc kubenswrapper[4687]: I0131 07:16:14.984506 4687 scope.go:117] "RemoveContainer" containerID="067116e8aa6dadfeb22d2c041ee5c818ebc935d4f59ceeefd77867071352b8cb" Jan 31 07:16:15 crc kubenswrapper[4687]: I0131 07:16:15.000538 4687 scope.go:117] "RemoveContainer" containerID="829eb8a3a323c6c98f85abad5a6e6c8ae17563e61b17350c95f76c0df7a70f82" Jan 31 07:16:15 crc kubenswrapper[4687]: I0131 07:16:15.015111 4687 scope.go:117] "RemoveContainer" containerID="07418b09ea9b43e2f4b1393bd07f96ae9987062bed63bf2dcc8bd66e1db90bc0" Jan 31 07:16:15 crc kubenswrapper[4687]: I0131 07:16:15.029581 4687 scope.go:117] "RemoveContainer" containerID="29971351b38387c34c20fe50e6de67979f4bc9723a1be93feef1492db50a6d31" Jan 31 07:16:15 crc kubenswrapper[4687]: I0131 07:16:15.042275 4687 scope.go:117] "RemoveContainer" containerID="3ab4ab844783fa31daf1c1eed13d6cad654b268a5cebed800beb83b2b4076a10" Jan 31 07:16:15 crc kubenswrapper[4687]: I0131 07:16:15.056395 4687 scope.go:117] "RemoveContainer" containerID="1de988ae783d7ef322b32e03cec233e8d6a73b90c66b17400298df3da2c6bba3" 
Jan 31 07:16:15 crc kubenswrapper[4687]: I0131 07:16:15.070261 4687 scope.go:117] "RemoveContainer" containerID="250db73b99466a6d136c29b5ddb443fea1455c9b3f051000bc5c30d2a3dcac0d" Jan 31 07:16:15 crc kubenswrapper[4687]: I0131 07:16:15.085579 4687 scope.go:117] "RemoveContainer" containerID="dc059a4299aaa5e0039676b11749b1ff11d523783abb720b1db4fca1b57d8a02" Jan 31 07:16:15 crc kubenswrapper[4687]: I0131 07:16:15.099979 4687 scope.go:117] "RemoveContainer" containerID="30c8a9046e479dd3d4719b5b38bd785ecc1a69005467729281cf8324e096a6d8" Jan 31 07:16:15 crc kubenswrapper[4687]: I0131 07:16:15.113841 4687 scope.go:117] "RemoveContainer" containerID="502b54aa63f153278d1af53d6e2ef57ee86668bc1ca4b9331e43f7e1d8fcdd51" Jan 31 07:16:15 crc kubenswrapper[4687]: I0131 07:16:15.160525 4687 scope.go:117] "RemoveContainer" containerID="57255eff28aadc0f504b048b696e5785a65bddda1c04167b42793b0ae630f5f8" Jan 31 07:16:15 crc kubenswrapper[4687]: I0131 07:16:15.623726 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" path="/var/lib/kubelet/pods/4f3169d5-4ca5-47e8-a6a4-b34705f30dd0/volumes" Jan 31 07:16:18 crc kubenswrapper[4687]: I0131 07:16:18.957421 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g2ng4/must-gather-twkvd" event={"ID":"b50a6b30-d3a3-4933-8bf2-f520e564d386","Type":"ContainerStarted","Data":"d57e32e0fdf199790fb2ed2848976453a0736ee6aab15a92007e9558bad55437"} Jan 31 07:16:18 crc kubenswrapper[4687]: I0131 07:16:18.957774 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g2ng4/must-gather-twkvd" event={"ID":"b50a6b30-d3a3-4933-8bf2-f520e564d386","Type":"ContainerStarted","Data":"bc3d4110fffdc5f386ee189357ed1f372e66b2b9174a0940a37fb09af47cc513"} Jan 31 07:16:18 crc kubenswrapper[4687]: I0131 07:16:18.976255 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-g2ng4/must-gather-twkvd" 
podStartSLOduration=2.36276836 podStartE2EDuration="5.976235733s" podCreationTimestamp="2026-01-31 07:16:13 +0000 UTC" firstStartedPulling="2026-01-31 07:16:14.455184387 +0000 UTC m=+2000.732443952" lastFinishedPulling="2026-01-31 07:16:18.06865175 +0000 UTC m=+2004.345911325" observedRunningTime="2026-01-31 07:16:18.971936126 +0000 UTC m=+2005.249195701" watchObservedRunningTime="2026-01-31 07:16:18.976235733 +0000 UTC m=+2005.253495308" Jan 31 07:16:22 crc kubenswrapper[4687]: E0131 07:16:22.705317 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 07:16:22 crc kubenswrapper[4687]: E0131 07:16:22.705641 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:16:23.205623522 +0000 UTC m=+2009.482883097 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : configmap "openstack-config" not found Jan 31 07:16:22 crc kubenswrapper[4687]: E0131 07:16:22.705357 4687 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 07:16:22 crc kubenswrapper[4687]: E0131 07:16:22.705780 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:16:23.205762186 +0000 UTC m=+2009.483021761 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : secret "openstack-config-secret" not found Jan 31 07:16:23 crc kubenswrapper[4687]: E0131 07:16:23.210634 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 07:16:23 crc kubenswrapper[4687]: E0131 07:16:23.210996 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:16:24.210975378 +0000 UTC m=+2010.488234973 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : configmap "openstack-config" not found Jan 31 07:16:23 crc kubenswrapper[4687]: E0131 07:16:23.211219 4687 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 07:16:23 crc kubenswrapper[4687]: E0131 07:16:23.211371 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:16:24.211355448 +0000 UTC m=+2010.488615033 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : secret "openstack-config-secret" not found Jan 31 07:16:24 crc kubenswrapper[4687]: E0131 07:16:24.225562 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 07:16:24 crc kubenswrapper[4687]: E0131 07:16:24.225905 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:16:26.225886433 +0000 UTC m=+2012.503146008 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : configmap "openstack-config" not found Jan 31 07:16:24 crc kubenswrapper[4687]: E0131 07:16:24.225667 4687 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 07:16:24 crc kubenswrapper[4687]: E0131 07:16:24.226047 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:16:26.226026527 +0000 UTC m=+2012.503286152 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : secret "openstack-config-secret" not found Jan 31 07:16:26 crc kubenswrapper[4687]: E0131 07:16:26.254277 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 07:16:26 crc kubenswrapper[4687]: E0131 07:16:26.254673 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:16:30.254655835 +0000 UTC m=+2016.531915420 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : configmap "openstack-config" not found Jan 31 07:16:26 crc kubenswrapper[4687]: E0131 07:16:26.254352 4687 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 07:16:26 crc kubenswrapper[4687]: E0131 07:16:26.254763 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:16:30.254746107 +0000 UTC m=+2016.532005682 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : secret "openstack-config-secret" not found Jan 31 07:16:28 crc kubenswrapper[4687]: I0131 07:16:28.684421 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:16:28 crc kubenswrapper[4687]: I0131 07:16:28.684480 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:16:30 crc kubenswrapper[4687]: E0131 07:16:30.311235 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 07:16:30 crc kubenswrapper[4687]: E0131 07:16:30.311668 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:16:38.311649084 +0000 UTC m=+2024.588908669 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : configmap "openstack-config" not found Jan 31 07:16:30 crc kubenswrapper[4687]: E0131 07:16:30.311480 4687 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 07:16:30 crc kubenswrapper[4687]: E0131 07:16:30.311800 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:16:38.311777777 +0000 UTC m=+2024.589037412 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : secret "openstack-config-secret" not found Jan 31 07:16:38 crc kubenswrapper[4687]: E0131 07:16:38.319937 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 07:16:38 crc kubenswrapper[4687]: E0131 07:16:38.319970 4687 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 07:16:38 crc kubenswrapper[4687]: E0131 07:16:38.320592 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:16:54.320571642 +0000 UTC m=+2040.597831217 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : configmap "openstack-config" not found Jan 31 07:16:38 crc kubenswrapper[4687]: E0131 07:16:38.320647 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:16:54.320626413 +0000 UTC m=+2040.597886078 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : secret "openstack-config-secret" not found Jan 31 07:16:52 crc kubenswrapper[4687]: I0131 07:16:52.862381 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2_c6cf66be-126e-4ac2-ba8b-165628cd03e7/util/0.log" Jan 31 07:16:53 crc kubenswrapper[4687]: I0131 07:16:53.029062 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2_c6cf66be-126e-4ac2-ba8b-165628cd03e7/util/0.log" Jan 31 07:16:53 crc kubenswrapper[4687]: I0131 07:16:53.064113 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2_c6cf66be-126e-4ac2-ba8b-165628cd03e7/pull/0.log" Jan 31 07:16:53 crc kubenswrapper[4687]: I0131 07:16:53.073957 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2_c6cf66be-126e-4ac2-ba8b-165628cd03e7/pull/0.log" Jan 31 07:16:53 crc 
kubenswrapper[4687]: I0131 07:16:53.227238 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2_c6cf66be-126e-4ac2-ba8b-165628cd03e7/extract/0.log" Jan 31 07:16:53 crc kubenswrapper[4687]: I0131 07:16:53.241922 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2_c6cf66be-126e-4ac2-ba8b-165628cd03e7/pull/0.log" Jan 31 07:16:53 crc kubenswrapper[4687]: I0131 07:16:53.253652 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2_c6cf66be-126e-4ac2-ba8b-165628cd03e7/util/0.log" Jan 31 07:16:53 crc kubenswrapper[4687]: I0131 07:16:53.432356 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-847c44d56-p7g54_9baebd08-f9ca-4a8c-a12c-2609be678e5c/manager/0.log" Jan 31 07:16:53 crc kubenswrapper[4687]: I0131 07:16:53.457291 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-index-zb8pz_f412fd69-af65-4534-97fc-1ddbd4ec579d/registry-server/0.log" Jan 31 07:16:54 crc kubenswrapper[4687]: E0131 07:16:54.324533 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 07:16:54 crc kubenswrapper[4687]: E0131 07:16:54.324867 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:17:26.324853696 +0000 UTC m=+2072.602113271 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : configmap "openstack-config" not found Jan 31 07:16:54 crc kubenswrapper[4687]: E0131 07:16:54.324531 4687 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 07:16:54 crc kubenswrapper[4687]: E0131 07:16:54.324955 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:17:26.324941688 +0000 UTC m=+2072.602201263 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : secret "openstack-config-secret" not found Jan 31 07:16:58 crc kubenswrapper[4687]: I0131 07:16:58.684996 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:16:58 crc kubenswrapper[4687]: I0131 07:16:58.685299 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:16:58 crc kubenswrapper[4687]: I0131 07:16:58.685343 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 07:16:58 crc kubenswrapper[4687]: I0131 07:16:58.685909 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1458eddabbda7dfa359296e6f0341f043ba4048f4cd1df8854c7c04d61090c0a"} pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:16:58 crc kubenswrapper[4687]: I0131 07:16:58.685967 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" containerID="cri-o://1458eddabbda7dfa359296e6f0341f043ba4048f4cd1df8854c7c04d61090c0a" gracePeriod=600 Jan 31 07:16:59 crc kubenswrapper[4687]: I0131 07:16:59.205501 4687 generic.go:334] "Generic (PLEG): container finished" podID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerID="1458eddabbda7dfa359296e6f0341f043ba4048f4cd1df8854c7c04d61090c0a" exitCode=0 Jan 31 07:16:59 crc kubenswrapper[4687]: I0131 07:16:59.205583 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerDied","Data":"1458eddabbda7dfa359296e6f0341f043ba4048f4cd1df8854c7c04d61090c0a"} Jan 31 07:16:59 crc kubenswrapper[4687]: I0131 07:16:59.205833 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerStarted","Data":"e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545"} Jan 31 07:16:59 crc kubenswrapper[4687]: I0131 07:16:59.205853 4687 scope.go:117] "RemoveContainer" 
containerID="6f5eff3e16364b9ca87982dba9c1396a35294f72c5cc56ee899e3c80e259327c" Jan 31 07:17:06 crc kubenswrapper[4687]: I0131 07:17:06.930933 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vsgwh_ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef/control-plane-machine-set-operator/0.log" Jan 31 07:17:07 crc kubenswrapper[4687]: I0131 07:17:07.104152 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kv6zt_8f3171d3-7275-477b-8c99-cae75ecd914c/machine-api-operator/0.log" Jan 31 07:17:07 crc kubenswrapper[4687]: I0131 07:17:07.109581 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kv6zt_8f3171d3-7275-477b-8c99-cae75ecd914c/kube-rbac-proxy/0.log" Jan 31 07:17:08 crc kubenswrapper[4687]: I0131 07:17:08.345244 4687 scope.go:117] "RemoveContainer" containerID="a5c7f5011504f20993daf2b86806422de15bf7e8535d819064047cd5995791d1" Jan 31 07:17:08 crc kubenswrapper[4687]: I0131 07:17:08.408248 4687 scope.go:117] "RemoveContainer" containerID="f2c9eda8abca0dbbeadbfaa1a88b276fc5416fedfc321695a48322da8e838e87" Jan 31 07:17:08 crc kubenswrapper[4687]: I0131 07:17:08.429196 4687 scope.go:117] "RemoveContainer" containerID="ccbf357c32a953a52079ade34a4d95cc4e18dec834e48ee3442cac1445c26404" Jan 31 07:17:08 crc kubenswrapper[4687]: I0131 07:17:08.452899 4687 scope.go:117] "RemoveContainer" containerID="8997d9ed0a7d01fe070901aa7ad1c7cd27ad2fb20f2a3681c1a5a7fbfdb16824" Jan 31 07:17:26 crc kubenswrapper[4687]: E0131 07:17:26.407505 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 07:17:26 crc kubenswrapper[4687]: E0131 07:17:26.408134 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config 
podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:18:30.408117941 +0000 UTC m=+2136.685377516 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : configmap "openstack-config" not found Jan 31 07:17:26 crc kubenswrapper[4687]: E0131 07:17:26.407524 4687 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 07:17:26 crc kubenswrapper[4687]: E0131 07:17:26.408272 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:18:30.408244305 +0000 UTC m=+2136.685503880 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : secret "openstack-config-secret" not found Jan 31 07:17:33 crc kubenswrapper[4687]: I0131 07:17:33.944450 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-mlbgs_fafa13d1-be81-401e-bb57-ad4e391192c2/controller/0.log" Jan 31 07:17:33 crc kubenswrapper[4687]: I0131 07:17:33.967610 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-mlbgs_fafa13d1-be81-401e-bb57-ad4e391192c2/kube-rbac-proxy/0.log" Jan 31 07:17:34 crc kubenswrapper[4687]: I0131 07:17:34.109975 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-frr-files/0.log" Jan 31 07:17:34 crc kubenswrapper[4687]: I0131 07:17:34.305919 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-reloader/0.log" Jan 31 07:17:34 crc kubenswrapper[4687]: I0131 07:17:34.309897 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-frr-files/0.log" Jan 31 07:17:34 crc kubenswrapper[4687]: I0131 07:17:34.325287 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-metrics/0.log" Jan 31 07:17:34 crc kubenswrapper[4687]: I0131 07:17:34.346642 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-reloader/0.log" Jan 31 07:17:34 crc kubenswrapper[4687]: I0131 07:17:34.524660 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-metrics/0.log" Jan 31 07:17:34 crc kubenswrapper[4687]: I0131 07:17:34.545775 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-frr-files/0.log" Jan 31 07:17:34 crc kubenswrapper[4687]: I0131 07:17:34.553615 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-reloader/0.log" Jan 31 07:17:34 crc kubenswrapper[4687]: I0131 07:17:34.565873 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-metrics/0.log" Jan 31 07:17:34 crc kubenswrapper[4687]: I0131 07:17:34.794153 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-metrics/0.log" Jan 31 07:17:34 crc kubenswrapper[4687]: I0131 07:17:34.820379 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-frr-files/0.log" Jan 31 07:17:34 crc kubenswrapper[4687]: I0131 07:17:34.820438 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-reloader/0.log" Jan 31 07:17:34 crc kubenswrapper[4687]: I0131 07:17:34.830889 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/controller/0.log" Jan 31 07:17:34 crc kubenswrapper[4687]: I0131 07:17:34.988320 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/kube-rbac-proxy-frr/0.log" Jan 31 07:17:35 crc kubenswrapper[4687]: I0131 07:17:35.018362 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/frr-metrics/0.log" Jan 31 07:17:35 crc kubenswrapper[4687]: I0131 07:17:35.019439 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/kube-rbac-proxy/0.log" Jan 31 07:17:35 crc kubenswrapper[4687]: I0131 07:17:35.182790 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/reloader/0.log" Jan 31 07:17:35 crc kubenswrapper[4687]: I0131 07:17:35.229395 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-95vth_e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9/frr-k8s-webhook-server/0.log" Jan 31 07:17:35 crc kubenswrapper[4687]: I0131 07:17:35.424007 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6bc67c7795-gjjmn_56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e/manager/0.log" Jan 31 07:17:35 crc kubenswrapper[4687]: I0131 07:17:35.614795 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-69bb4c5fc8-6rcfd_ad709481-acec-41f1-af1d-3c84b69f7b2f/webhook-server/0.log" Jan 31 07:17:35 crc kubenswrapper[4687]: I0131 07:17:35.664811 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cqvh6_8cacba96-9df5-43d5-8e68-2a66b3dc0806/kube-rbac-proxy/0.log" Jan 31 07:17:35 crc kubenswrapper[4687]: I0131 07:17:35.707386 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/frr/0.log" Jan 31 07:17:35 crc kubenswrapper[4687]: I0131 07:17:35.944058 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cqvh6_8cacba96-9df5-43d5-8e68-2a66b3dc0806/speaker/0.log" Jan 31 07:17:47 crc kubenswrapper[4687]: I0131 07:17:47.423280 4687 log.go:25] "Finished 
parsing log file" path="/var/log/pods/glance-kuttl-tests_openstackclient_17078dd3-3694-49b1-8513-fcc5e9af5902/openstackclient/0.log" Jan 31 07:17:58 crc kubenswrapper[4687]: I0131 07:17:58.262437 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz_838dbbef-88b2-4605-9482-2628852377fa/util/0.log" Jan 31 07:17:58 crc kubenswrapper[4687]: I0131 07:17:58.464948 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz_838dbbef-88b2-4605-9482-2628852377fa/pull/0.log" Jan 31 07:17:58 crc kubenswrapper[4687]: I0131 07:17:58.470461 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz_838dbbef-88b2-4605-9482-2628852377fa/util/0.log" Jan 31 07:17:58 crc kubenswrapper[4687]: I0131 07:17:58.524543 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz_838dbbef-88b2-4605-9482-2628852377fa/pull/0.log" Jan 31 07:17:58 crc kubenswrapper[4687]: I0131 07:17:58.698183 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz_838dbbef-88b2-4605-9482-2628852377fa/extract/0.log" Jan 31 07:17:58 crc kubenswrapper[4687]: I0131 07:17:58.703628 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz_838dbbef-88b2-4605-9482-2628852377fa/util/0.log" Jan 31 07:17:58 crc kubenswrapper[4687]: I0131 07:17:58.705133 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz_838dbbef-88b2-4605-9482-2628852377fa/pull/0.log" Jan 31 07:17:58 crc 
kubenswrapper[4687]: I0131 07:17:58.872476 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwgjc_944e21b2-ebb1-48c3-aaa8-f0264981f380/extract-utilities/0.log" Jan 31 07:17:59 crc kubenswrapper[4687]: I0131 07:17:59.019103 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwgjc_944e21b2-ebb1-48c3-aaa8-f0264981f380/extract-utilities/0.log" Jan 31 07:17:59 crc kubenswrapper[4687]: I0131 07:17:59.042551 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwgjc_944e21b2-ebb1-48c3-aaa8-f0264981f380/extract-content/0.log" Jan 31 07:17:59 crc kubenswrapper[4687]: I0131 07:17:59.054855 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwgjc_944e21b2-ebb1-48c3-aaa8-f0264981f380/extract-content/0.log" Jan 31 07:17:59 crc kubenswrapper[4687]: I0131 07:17:59.235653 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwgjc_944e21b2-ebb1-48c3-aaa8-f0264981f380/extract-utilities/0.log" Jan 31 07:17:59 crc kubenswrapper[4687]: I0131 07:17:59.264095 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwgjc_944e21b2-ebb1-48c3-aaa8-f0264981f380/extract-content/0.log" Jan 31 07:17:59 crc kubenswrapper[4687]: I0131 07:17:59.436911 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2zq5g_824621bb-1ee0-4034-9dfc-d8bc3440757c/extract-utilities/0.log" Jan 31 07:17:59 crc kubenswrapper[4687]: I0131 07:17:59.660224 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwgjc_944e21b2-ebb1-48c3-aaa8-f0264981f380/registry-server/0.log" Jan 31 07:17:59 crc kubenswrapper[4687]: I0131 07:17:59.665362 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-2zq5g_824621bb-1ee0-4034-9dfc-d8bc3440757c/extract-content/0.log" Jan 31 07:17:59 crc kubenswrapper[4687]: I0131 07:17:59.709294 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2zq5g_824621bb-1ee0-4034-9dfc-d8bc3440757c/extract-content/0.log" Jan 31 07:17:59 crc kubenswrapper[4687]: I0131 07:17:59.725079 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2zq5g_824621bb-1ee0-4034-9dfc-d8bc3440757c/extract-utilities/0.log" Jan 31 07:17:59 crc kubenswrapper[4687]: I0131 07:17:59.830944 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2zq5g_824621bb-1ee0-4034-9dfc-d8bc3440757c/extract-content/0.log" Jan 31 07:17:59 crc kubenswrapper[4687]: I0131 07:17:59.861240 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2zq5g_824621bb-1ee0-4034-9dfc-d8bc3440757c/extract-utilities/0.log" Jan 31 07:18:00 crc kubenswrapper[4687]: I0131 07:18:00.018678 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ff2sf_d11e6dc8-1dc0-442d-951a-b3c6613f938f/marketplace-operator/0.log" Jan 31 07:18:00 crc kubenswrapper[4687]: I0131 07:18:00.149337 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cnml_48ff7f6a-0a52-4206-9fe1-5177e900634b/extract-utilities/0.log" Jan 31 07:18:00 crc kubenswrapper[4687]: I0131 07:18:00.329238 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cnml_48ff7f6a-0a52-4206-9fe1-5177e900634b/extract-content/0.log" Jan 31 07:18:00 crc kubenswrapper[4687]: I0131 07:18:00.347491 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cnml_48ff7f6a-0a52-4206-9fe1-5177e900634b/extract-content/0.log" Jan 31 07:18:00 crc kubenswrapper[4687]: I0131 07:18:00.384063 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2zq5g_824621bb-1ee0-4034-9dfc-d8bc3440757c/registry-server/0.log" Jan 31 07:18:00 crc kubenswrapper[4687]: I0131 07:18:00.387581 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cnml_48ff7f6a-0a52-4206-9fe1-5177e900634b/extract-utilities/0.log" Jan 31 07:18:00 crc kubenswrapper[4687]: I0131 07:18:00.518215 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cnml_48ff7f6a-0a52-4206-9fe1-5177e900634b/extract-utilities/0.log" Jan 31 07:18:00 crc kubenswrapper[4687]: I0131 07:18:00.553505 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cnml_48ff7f6a-0a52-4206-9fe1-5177e900634b/extract-content/0.log" Jan 31 07:18:00 crc kubenswrapper[4687]: I0131 07:18:00.689794 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cnml_48ff7f6a-0a52-4206-9fe1-5177e900634b/registry-server/0.log" Jan 31 07:18:00 crc kubenswrapper[4687]: I0131 07:18:00.694133 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vl6d_4cce49bf-11b5-4c33-b241-b829e91eb9a2/extract-utilities/0.log" Jan 31 07:18:00 crc kubenswrapper[4687]: I0131 07:18:00.882002 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vl6d_4cce49bf-11b5-4c33-b241-b829e91eb9a2/extract-content/0.log" Jan 31 07:18:00 crc kubenswrapper[4687]: I0131 07:18:00.903846 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-5vl6d_4cce49bf-11b5-4c33-b241-b829e91eb9a2/extract-utilities/0.log" Jan 31 07:18:00 crc kubenswrapper[4687]: I0131 07:18:00.903859 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vl6d_4cce49bf-11b5-4c33-b241-b829e91eb9a2/extract-content/0.log" Jan 31 07:18:01 crc kubenswrapper[4687]: I0131 07:18:01.060740 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vl6d_4cce49bf-11b5-4c33-b241-b829e91eb9a2/extract-content/0.log" Jan 31 07:18:01 crc kubenswrapper[4687]: I0131 07:18:01.065231 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vl6d_4cce49bf-11b5-4c33-b241-b829e91eb9a2/extract-utilities/0.log" Jan 31 07:18:01 crc kubenswrapper[4687]: I0131 07:18:01.486213 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vl6d_4cce49bf-11b5-4c33-b241-b829e91eb9a2/registry-server/0.log" Jan 31 07:18:08 crc kubenswrapper[4687]: I0131 07:18:08.645366 4687 scope.go:117] "RemoveContainer" containerID="40ca971aa41d8a2a56c8b413d0ce335f0ab64b257eb404beb72f9e78baba2807" Jan 31 07:18:08 crc kubenswrapper[4687]: I0131 07:18:08.696931 4687 scope.go:117] "RemoveContainer" containerID="b423851f07145c278cf65cf0c5aa4a0713dc23590a914473207968afa66ca330" Jan 31 07:18:08 crc kubenswrapper[4687]: I0131 07:18:08.712951 4687 scope.go:117] "RemoveContainer" containerID="50195bf1daec8e92669fa77bbca3ded2dce68fbab162dab6d3681104455abf51" Jan 31 07:18:08 crc kubenswrapper[4687]: I0131 07:18:08.735753 4687 scope.go:117] "RemoveContainer" containerID="ad697f91df00467b940dec87a53e70442fd387730467cfd68d64a9fbafcaff87" Jan 31 07:18:30 crc kubenswrapper[4687]: E0131 07:18:30.411745 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 07:18:30 crc kubenswrapper[4687]: 
E0131 07:18:30.412343 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:20:32.412326489 +0000 UTC m=+2258.689586064 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : configmap "openstack-config" not found Jan 31 07:18:30 crc kubenswrapper[4687]: E0131 07:18:30.412778 4687 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 07:18:30 crc kubenswrapper[4687]: E0131 07:18:30.412811 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:20:32.412801512 +0000 UTC m=+2258.690061087 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : secret "openstack-config-secret" not found Jan 31 07:18:58 crc kubenswrapper[4687]: I0131 07:18:58.684271 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:18:58 crc kubenswrapper[4687]: I0131 07:18:58.684865 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:19:08 crc kubenswrapper[4687]: I0131 07:19:08.806378 4687 scope.go:117] "RemoveContainer" containerID="07d6af67966269759eb0651e4762fdadb801c44d4243ce89de96056707307363" Jan 31 07:19:08 crc kubenswrapper[4687]: I0131 07:19:08.828637 4687 scope.go:117] "RemoveContainer" containerID="ded8f27288ed169650289e3a12e1b2609f051b47914652f26c02f2b572b7ec86" Jan 31 07:19:08 crc kubenswrapper[4687]: I0131 07:19:08.866559 4687 scope.go:117] "RemoveContainer" containerID="0afef5cdc06c693bb8942968a397546f2ad5966a3405887ec65baf694f4987e6" Jan 31 07:19:08 crc kubenswrapper[4687]: I0131 07:19:08.893164 4687 scope.go:117] "RemoveContainer" containerID="868ec0ba8c19d831cc419d77039b4bcd7558ea51858b1d529ff026917af5595c" Jan 31 07:19:18 crc kubenswrapper[4687]: I0131 07:19:18.177759 4687 generic.go:334] "Generic (PLEG): container finished" podID="b50a6b30-d3a3-4933-8bf2-f520e564d386" containerID="bc3d4110fffdc5f386ee189357ed1f372e66b2b9174a0940a37fb09af47cc513" exitCode=0 
Jan 31 07:19:18 crc kubenswrapper[4687]: I0131 07:19:18.177823 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g2ng4/must-gather-twkvd" event={"ID":"b50a6b30-d3a3-4933-8bf2-f520e564d386","Type":"ContainerDied","Data":"bc3d4110fffdc5f386ee189357ed1f372e66b2b9174a0940a37fb09af47cc513"} Jan 31 07:19:18 crc kubenswrapper[4687]: I0131 07:19:18.178952 4687 scope.go:117] "RemoveContainer" containerID="bc3d4110fffdc5f386ee189357ed1f372e66b2b9174a0940a37fb09af47cc513" Jan 31 07:19:18 crc kubenswrapper[4687]: I0131 07:19:18.219861 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g2ng4_must-gather-twkvd_b50a6b30-d3a3-4933-8bf2-f520e564d386/gather/0.log" Jan 31 07:19:25 crc kubenswrapper[4687]: I0131 07:19:25.234287 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-g2ng4/must-gather-twkvd"] Jan 31 07:19:25 crc kubenswrapper[4687]: I0131 07:19:25.235794 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-g2ng4/must-gather-twkvd" podUID="b50a6b30-d3a3-4933-8bf2-f520e564d386" containerName="copy" containerID="cri-o://d57e32e0fdf199790fb2ed2848976453a0736ee6aab15a92007e9558bad55437" gracePeriod=2 Jan 31 07:19:25 crc kubenswrapper[4687]: I0131 07:19:25.238068 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-g2ng4/must-gather-twkvd"] Jan 31 07:19:25 crc kubenswrapper[4687]: I0131 07:19:25.573002 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g2ng4_must-gather-twkvd_b50a6b30-d3a3-4933-8bf2-f520e564d386/copy/0.log" Jan 31 07:19:25 crc kubenswrapper[4687]: I0131 07:19:25.573453 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g2ng4/must-gather-twkvd" Jan 31 07:19:25 crc kubenswrapper[4687]: I0131 07:19:25.647899 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gql5r\" (UniqueName: \"kubernetes.io/projected/b50a6b30-d3a3-4933-8bf2-f520e564d386-kube-api-access-gql5r\") pod \"b50a6b30-d3a3-4933-8bf2-f520e564d386\" (UID: \"b50a6b30-d3a3-4933-8bf2-f520e564d386\") " Jan 31 07:19:25 crc kubenswrapper[4687]: I0131 07:19:25.648010 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b50a6b30-d3a3-4933-8bf2-f520e564d386-must-gather-output\") pod \"b50a6b30-d3a3-4933-8bf2-f520e564d386\" (UID: \"b50a6b30-d3a3-4933-8bf2-f520e564d386\") " Jan 31 07:19:25 crc kubenswrapper[4687]: I0131 07:19:25.655699 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b50a6b30-d3a3-4933-8bf2-f520e564d386-kube-api-access-gql5r" (OuterVolumeSpecName: "kube-api-access-gql5r") pod "b50a6b30-d3a3-4933-8bf2-f520e564d386" (UID: "b50a6b30-d3a3-4933-8bf2-f520e564d386"). InnerVolumeSpecName "kube-api-access-gql5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:19:25 crc kubenswrapper[4687]: I0131 07:19:25.715593 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b50a6b30-d3a3-4933-8bf2-f520e564d386-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b50a6b30-d3a3-4933-8bf2-f520e564d386" (UID: "b50a6b30-d3a3-4933-8bf2-f520e564d386"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:19:25 crc kubenswrapper[4687]: I0131 07:19:25.749431 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gql5r\" (UniqueName: \"kubernetes.io/projected/b50a6b30-d3a3-4933-8bf2-f520e564d386-kube-api-access-gql5r\") on node \"crc\" DevicePath \"\"" Jan 31 07:19:25 crc kubenswrapper[4687]: I0131 07:19:25.749464 4687 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b50a6b30-d3a3-4933-8bf2-f520e564d386-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 07:19:26 crc kubenswrapper[4687]: I0131 07:19:26.230807 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g2ng4_must-gather-twkvd_b50a6b30-d3a3-4933-8bf2-f520e564d386/copy/0.log" Jan 31 07:19:26 crc kubenswrapper[4687]: I0131 07:19:26.231183 4687 generic.go:334] "Generic (PLEG): container finished" podID="b50a6b30-d3a3-4933-8bf2-f520e564d386" containerID="d57e32e0fdf199790fb2ed2848976453a0736ee6aab15a92007e9558bad55437" exitCode=143 Jan 31 07:19:26 crc kubenswrapper[4687]: I0131 07:19:26.231248 4687 scope.go:117] "RemoveContainer" containerID="d57e32e0fdf199790fb2ed2848976453a0736ee6aab15a92007e9558bad55437" Jan 31 07:19:26 crc kubenswrapper[4687]: I0131 07:19:26.231393 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g2ng4/must-gather-twkvd" Jan 31 07:19:26 crc kubenswrapper[4687]: I0131 07:19:26.251057 4687 scope.go:117] "RemoveContainer" containerID="bc3d4110fffdc5f386ee189357ed1f372e66b2b9174a0940a37fb09af47cc513" Jan 31 07:19:26 crc kubenswrapper[4687]: I0131 07:19:26.305806 4687 scope.go:117] "RemoveContainer" containerID="d57e32e0fdf199790fb2ed2848976453a0736ee6aab15a92007e9558bad55437" Jan 31 07:19:26 crc kubenswrapper[4687]: E0131 07:19:26.306192 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d57e32e0fdf199790fb2ed2848976453a0736ee6aab15a92007e9558bad55437\": container with ID starting with d57e32e0fdf199790fb2ed2848976453a0736ee6aab15a92007e9558bad55437 not found: ID does not exist" containerID="d57e32e0fdf199790fb2ed2848976453a0736ee6aab15a92007e9558bad55437" Jan 31 07:19:26 crc kubenswrapper[4687]: I0131 07:19:26.306229 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d57e32e0fdf199790fb2ed2848976453a0736ee6aab15a92007e9558bad55437"} err="failed to get container status \"d57e32e0fdf199790fb2ed2848976453a0736ee6aab15a92007e9558bad55437\": rpc error: code = NotFound desc = could not find container \"d57e32e0fdf199790fb2ed2848976453a0736ee6aab15a92007e9558bad55437\": container with ID starting with d57e32e0fdf199790fb2ed2848976453a0736ee6aab15a92007e9558bad55437 not found: ID does not exist" Jan 31 07:19:26 crc kubenswrapper[4687]: I0131 07:19:26.306258 4687 scope.go:117] "RemoveContainer" containerID="bc3d4110fffdc5f386ee189357ed1f372e66b2b9174a0940a37fb09af47cc513" Jan 31 07:19:26 crc kubenswrapper[4687]: E0131 07:19:26.306573 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc3d4110fffdc5f386ee189357ed1f372e66b2b9174a0940a37fb09af47cc513\": container with ID starting with 
bc3d4110fffdc5f386ee189357ed1f372e66b2b9174a0940a37fb09af47cc513 not found: ID does not exist" containerID="bc3d4110fffdc5f386ee189357ed1f372e66b2b9174a0940a37fb09af47cc513" Jan 31 07:19:26 crc kubenswrapper[4687]: I0131 07:19:26.306597 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc3d4110fffdc5f386ee189357ed1f372e66b2b9174a0940a37fb09af47cc513"} err="failed to get container status \"bc3d4110fffdc5f386ee189357ed1f372e66b2b9174a0940a37fb09af47cc513\": rpc error: code = NotFound desc = could not find container \"bc3d4110fffdc5f386ee189357ed1f372e66b2b9174a0940a37fb09af47cc513\": container with ID starting with bc3d4110fffdc5f386ee189357ed1f372e66b2b9174a0940a37fb09af47cc513 not found: ID does not exist" Jan 31 07:19:27 crc kubenswrapper[4687]: I0131 07:19:27.612952 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b50a6b30-d3a3-4933-8bf2-f520e564d386" path="/var/lib/kubelet/pods/b50a6b30-d3a3-4933-8bf2-f520e564d386/volumes" Jan 31 07:19:28 crc kubenswrapper[4687]: I0131 07:19:28.683907 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:19:28 crc kubenswrapper[4687]: I0131 07:19:28.683976 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:19:58 crc kubenswrapper[4687]: I0131 07:19:58.683740 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:19:58 crc kubenswrapper[4687]: I0131 07:19:58.684260 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:19:58 crc kubenswrapper[4687]: I0131 07:19:58.684300 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" Jan 31 07:19:58 crc kubenswrapper[4687]: I0131 07:19:58.684856 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545"} pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:19:58 crc kubenswrapper[4687]: I0131 07:19:58.684899 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" containerID="cri-o://e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" gracePeriod=600 Jan 31 07:19:58 crc kubenswrapper[4687]: E0131 07:19:58.822845 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:19:59 crc kubenswrapper[4687]: I0131 07:19:59.442254 4687 generic.go:334] "Generic (PLEG): container finished" podID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" exitCode=0 Jan 31 07:19:59 crc kubenswrapper[4687]: I0131 07:19:59.442300 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerDied","Data":"e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545"} Jan 31 07:19:59 crc kubenswrapper[4687]: I0131 07:19:59.442330 4687 scope.go:117] "RemoveContainer" containerID="1458eddabbda7dfa359296e6f0341f043ba4048f4cd1df8854c7c04d61090c0a" Jan 31 07:19:59 crc kubenswrapper[4687]: I0131 07:19:59.442822 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:19:59 crc kubenswrapper[4687]: E0131 07:19:59.443033 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:20:08 crc kubenswrapper[4687]: I0131 07:20:08.997468 4687 scope.go:117] "RemoveContainer" containerID="afa513251b126ee79e7fc5ce61450365d1fc9a490004cad8921400888003356f" Jan 31 07:20:09 crc kubenswrapper[4687]: I0131 07:20:09.023671 4687 scope.go:117] "RemoveContainer" containerID="8051433f29f81d9091193f67191cacde0216e9cd3220069dbf489487eaa05c08" Jan 31 07:20:09 crc kubenswrapper[4687]: I0131 07:20:09.045176 4687 scope.go:117] 
"RemoveContainer" containerID="7c90efcf32d96cb6e664df07f9eafde2a35a9d4b4af2f5a6085b97dabefc3e4d" Jan 31 07:20:09 crc kubenswrapper[4687]: I0131 07:20:09.096520 4687 scope.go:117] "RemoveContainer" containerID="dfe225a848b4dcd875b31d396dead41aebd8c8557d0ed6d237318a6400d0cebf" Jan 31 07:20:14 crc kubenswrapper[4687]: I0131 07:20:14.603529 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:20:14 crc kubenswrapper[4687]: E0131 07:20:14.604838 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:20:27 crc kubenswrapper[4687]: I0131 07:20:27.603789 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:20:27 crc kubenswrapper[4687]: E0131 07:20:27.605751 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:20:32 crc kubenswrapper[4687]: E0131 07:20:32.490149 4687 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 07:20:32 crc kubenswrapper[4687]: E0131 07:20:32.490827 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:34.490798952 +0000 UTC m=+2380.768058547 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : secret "openstack-config-secret" not found Jan 31 07:20:32 crc kubenswrapper[4687]: E0131 07:20:32.490222 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 07:20:32 crc kubenswrapper[4687]: E0131 07:20:32.491591 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:22:34.491561973 +0000 UTC m=+2380.768821578 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : configmap "openstack-config" not found Jan 31 07:20:38 crc kubenswrapper[4687]: I0131 07:20:38.603899 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:20:38 crc kubenswrapper[4687]: E0131 07:20:38.604427 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:20:50 crc kubenswrapper[4687]: I0131 07:20:50.603637 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:20:50 crc kubenswrapper[4687]: E0131 07:20:50.604332 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:21:04 crc kubenswrapper[4687]: I0131 07:21:04.603334 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:21:04 crc kubenswrapper[4687]: E0131 07:21:04.604547 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:21:15 crc kubenswrapper[4687]: I0131 07:21:15.606604 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:21:15 crc kubenswrapper[4687]: E0131 07:21:15.608958 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:21:28 crc kubenswrapper[4687]: I0131 07:21:28.603855 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:21:28 crc kubenswrapper[4687]: E0131 07:21:28.604709 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:21:43 crc kubenswrapper[4687]: I0131 07:21:43.603533 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:21:43 crc kubenswrapper[4687]: E0131 07:21:43.604359 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.723821 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dccfx"] Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724587 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-server" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724600 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-server" Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724611 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="container-updater" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724618 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="container-updater" Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724624 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="account-auditor" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724632 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="account-auditor" Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724642 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-expirer" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724648 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-expirer" Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724658 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b50a6b30-d3a3-4933-8bf2-f520e564d386" containerName="copy" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724663 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b50a6b30-d3a3-4933-8bf2-f520e564d386" containerName="copy" Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724673 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="container-replicator" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724678 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="container-replicator" Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724687 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="rsync" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724692 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="rsync" Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724700 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="container-server" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724706 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="container-server" Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724718 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="account-server" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724724 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" 
containerName="account-server" Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724733 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="container-auditor" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724740 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="container-auditor" Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724752 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-auditor" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724758 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-auditor" Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724769 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="account-reaper" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724775 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="account-reaper" Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724783 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="account-replicator" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724789 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="account-replicator" Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724800 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="swift-recon-cron" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724805 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" 
containerName="swift-recon-cron" Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724816 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b50a6b30-d3a3-4933-8bf2-f520e564d386" containerName="gather" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724821 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b50a6b30-d3a3-4933-8bf2-f520e564d386" containerName="gather" Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724830 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-updater" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724850 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-updater" Jan 31 07:21:49 crc kubenswrapper[4687]: E0131 07:21:49.724859 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-replicator" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724865 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-replicator" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724955 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-server" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724967 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="account-auditor" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724983 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b50a6b30-d3a3-4933-8bf2-f520e564d386" containerName="copy" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724992 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" 
containerName="account-reaper" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.724998 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-updater" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.725005 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="container-auditor" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.725013 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="account-server" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.725021 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="account-replicator" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.725030 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-replicator" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.725039 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-auditor" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.725049 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="rsync" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.725057 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="container-replicator" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.725064 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b50a6b30-d3a3-4933-8bf2-f520e564d386" containerName="gather" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.725072 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="object-expirer" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.725080 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="swift-recon-cron" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.725087 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="container-server" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.725095 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3169d5-4ca5-47e8-a6a4-b34705f30dd0" containerName="container-updater" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.725841 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.735178 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dccfx"] Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.779707 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-utilities\") pod \"community-operators-dccfx\" (UID: \"96b29ecd-7dac-4d9f-98f7-f60dce171bfd\") " pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.779779 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-catalog-content\") pod \"community-operators-dccfx\" (UID: \"96b29ecd-7dac-4d9f-98f7-f60dce171bfd\") " pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.779858 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8nld\" (UniqueName: \"kubernetes.io/projected/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-kube-api-access-j8nld\") pod \"community-operators-dccfx\" (UID: \"96b29ecd-7dac-4d9f-98f7-f60dce171bfd\") " pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.880886 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-utilities\") pod \"community-operators-dccfx\" (UID: \"96b29ecd-7dac-4d9f-98f7-f60dce171bfd\") " pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.880955 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-catalog-content\") pod \"community-operators-dccfx\" (UID: \"96b29ecd-7dac-4d9f-98f7-f60dce171bfd\") " pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.881015 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8nld\" (UniqueName: \"kubernetes.io/projected/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-kube-api-access-j8nld\") pod \"community-operators-dccfx\" (UID: \"96b29ecd-7dac-4d9f-98f7-f60dce171bfd\") " pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.881563 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-utilities\") pod \"community-operators-dccfx\" (UID: \"96b29ecd-7dac-4d9f-98f7-f60dce171bfd\") " pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.881722 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-catalog-content\") pod \"community-operators-dccfx\" (UID: \"96b29ecd-7dac-4d9f-98f7-f60dce171bfd\") " pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:21:49 crc kubenswrapper[4687]: I0131 07:21:49.900954 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8nld\" (UniqueName: \"kubernetes.io/projected/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-kube-api-access-j8nld\") pod \"community-operators-dccfx\" (UID: \"96b29ecd-7dac-4d9f-98f7-f60dce171bfd\") " pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:21:50 crc kubenswrapper[4687]: I0131 07:21:50.058234 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:21:50 crc kubenswrapper[4687]: I0131 07:21:50.291598 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dccfx"] Jan 31 07:21:51 crc kubenswrapper[4687]: I0131 07:21:51.186857 4687 generic.go:334] "Generic (PLEG): container finished" podID="96b29ecd-7dac-4d9f-98f7-f60dce171bfd" containerID="6033f8f36fa46f6140c47c12e74dd07f5bc309af5c0375be24dacbcdd3e3ecd1" exitCode=0 Jan 31 07:21:51 crc kubenswrapper[4687]: I0131 07:21:51.186927 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dccfx" event={"ID":"96b29ecd-7dac-4d9f-98f7-f60dce171bfd","Type":"ContainerDied","Data":"6033f8f36fa46f6140c47c12e74dd07f5bc309af5c0375be24dacbcdd3e3ecd1"} Jan 31 07:21:51 crc kubenswrapper[4687]: I0131 07:21:51.187203 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dccfx" event={"ID":"96b29ecd-7dac-4d9f-98f7-f60dce171bfd","Type":"ContainerStarted","Data":"fba46d2847a196b353d8332c91ca18e7af8054478d4e96e4faf537f629a6d340"} Jan 31 07:21:51 crc 
kubenswrapper[4687]: I0131 07:21:51.189117 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 07:21:52 crc kubenswrapper[4687]: I0131 07:21:52.207085 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dccfx" event={"ID":"96b29ecd-7dac-4d9f-98f7-f60dce171bfd","Type":"ContainerStarted","Data":"daa92fcada9ac535e0db31a067e0637ca4aadd687f752c009fff21632e51d596"} Jan 31 07:21:53 crc kubenswrapper[4687]: I0131 07:21:53.216126 4687 generic.go:334] "Generic (PLEG): container finished" podID="96b29ecd-7dac-4d9f-98f7-f60dce171bfd" containerID="daa92fcada9ac535e0db31a067e0637ca4aadd687f752c009fff21632e51d596" exitCode=0 Jan 31 07:21:53 crc kubenswrapper[4687]: I0131 07:21:53.216174 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dccfx" event={"ID":"96b29ecd-7dac-4d9f-98f7-f60dce171bfd","Type":"ContainerDied","Data":"daa92fcada9ac535e0db31a067e0637ca4aadd687f752c009fff21632e51d596"} Jan 31 07:21:54 crc kubenswrapper[4687]: I0131 07:21:54.223228 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dccfx" event={"ID":"96b29ecd-7dac-4d9f-98f7-f60dce171bfd","Type":"ContainerStarted","Data":"8148e5ae459a357bb627e97120fc9efcf7c98aba12db359d19e43dbebcdf4c09"} Jan 31 07:21:54 crc kubenswrapper[4687]: I0131 07:21:54.242180 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dccfx" podStartSLOduration=2.646671145 podStartE2EDuration="5.242158222s" podCreationTimestamp="2026-01-31 07:21:49 +0000 UTC" firstStartedPulling="2026-01-31 07:21:51.188844736 +0000 UTC m=+2337.466104311" lastFinishedPulling="2026-01-31 07:21:53.784331813 +0000 UTC m=+2340.061591388" observedRunningTime="2026-01-31 07:21:54.239675024 +0000 UTC m=+2340.516934599" watchObservedRunningTime="2026-01-31 07:21:54.242158222 +0000 UTC 
m=+2340.519417817" Jan 31 07:21:56 crc kubenswrapper[4687]: I0131 07:21:56.604050 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:21:56 crc kubenswrapper[4687]: E0131 07:21:56.604859 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:22:00 crc kubenswrapper[4687]: I0131 07:22:00.059468 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:22:00 crc kubenswrapper[4687]: I0131 07:22:00.059906 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:22:00 crc kubenswrapper[4687]: I0131 07:22:00.101761 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:22:00 crc kubenswrapper[4687]: I0131 07:22:00.299916 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:22:00 crc kubenswrapper[4687]: I0131 07:22:00.340454 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dccfx"] Jan 31 07:22:01 crc kubenswrapper[4687]: I0131 07:22:01.801240 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vlb7q/must-gather-fbjnk"] Jan 31 07:22:01 crc kubenswrapper[4687]: I0131 07:22:01.802659 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vlb7q/must-gather-fbjnk" Jan 31 07:22:01 crc kubenswrapper[4687]: I0131 07:22:01.806755 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vlb7q"/"default-dockercfg-ftbxr" Jan 31 07:22:01 crc kubenswrapper[4687]: I0131 07:22:01.806974 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vlb7q"/"openshift-service-ca.crt" Jan 31 07:22:01 crc kubenswrapper[4687]: I0131 07:22:01.813052 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vlb7q/must-gather-fbjnk"] Jan 31 07:22:01 crc kubenswrapper[4687]: I0131 07:22:01.815662 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vlb7q"/"kube-root-ca.crt" Jan 31 07:22:01 crc kubenswrapper[4687]: I0131 07:22:01.850475 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrt9d\" (UniqueName: \"kubernetes.io/projected/4c452b55-db4f-41bb-b4e1-be07609e3400-kube-api-access-vrt9d\") pod \"must-gather-fbjnk\" (UID: \"4c452b55-db4f-41bb-b4e1-be07609e3400\") " pod="openshift-must-gather-vlb7q/must-gather-fbjnk" Jan 31 07:22:01 crc kubenswrapper[4687]: I0131 07:22:01.850559 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4c452b55-db4f-41bb-b4e1-be07609e3400-must-gather-output\") pod \"must-gather-fbjnk\" (UID: \"4c452b55-db4f-41bb-b4e1-be07609e3400\") " pod="openshift-must-gather-vlb7q/must-gather-fbjnk" Jan 31 07:22:01 crc kubenswrapper[4687]: I0131 07:22:01.952298 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrt9d\" (UniqueName: \"kubernetes.io/projected/4c452b55-db4f-41bb-b4e1-be07609e3400-kube-api-access-vrt9d\") pod \"must-gather-fbjnk\" (UID: \"4c452b55-db4f-41bb-b4e1-be07609e3400\") " 
pod="openshift-must-gather-vlb7q/must-gather-fbjnk" Jan 31 07:22:01 crc kubenswrapper[4687]: I0131 07:22:01.952369 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4c452b55-db4f-41bb-b4e1-be07609e3400-must-gather-output\") pod \"must-gather-fbjnk\" (UID: \"4c452b55-db4f-41bb-b4e1-be07609e3400\") " pod="openshift-must-gather-vlb7q/must-gather-fbjnk" Jan 31 07:22:01 crc kubenswrapper[4687]: I0131 07:22:01.952848 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4c452b55-db4f-41bb-b4e1-be07609e3400-must-gather-output\") pod \"must-gather-fbjnk\" (UID: \"4c452b55-db4f-41bb-b4e1-be07609e3400\") " pod="openshift-must-gather-vlb7q/must-gather-fbjnk" Jan 31 07:22:01 crc kubenswrapper[4687]: I0131 07:22:01.970470 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrt9d\" (UniqueName: \"kubernetes.io/projected/4c452b55-db4f-41bb-b4e1-be07609e3400-kube-api-access-vrt9d\") pod \"must-gather-fbjnk\" (UID: \"4c452b55-db4f-41bb-b4e1-be07609e3400\") " pod="openshift-must-gather-vlb7q/must-gather-fbjnk" Jan 31 07:22:02 crc kubenswrapper[4687]: I0131 07:22:02.122542 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vlb7q/must-gather-fbjnk" Jan 31 07:22:02 crc kubenswrapper[4687]: I0131 07:22:02.276914 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dccfx" podUID="96b29ecd-7dac-4d9f-98f7-f60dce171bfd" containerName="registry-server" containerID="cri-o://8148e5ae459a357bb627e97120fc9efcf7c98aba12db359d19e43dbebcdf4c09" gracePeriod=2 Jan 31 07:22:02 crc kubenswrapper[4687]: I0131 07:22:02.536011 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vlb7q/must-gather-fbjnk"] Jan 31 07:22:02 crc kubenswrapper[4687]: I0131 07:22:02.580183 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:22:02 crc kubenswrapper[4687]: I0131 07:22:02.661711 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8nld\" (UniqueName: \"kubernetes.io/projected/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-kube-api-access-j8nld\") pod \"96b29ecd-7dac-4d9f-98f7-f60dce171bfd\" (UID: \"96b29ecd-7dac-4d9f-98f7-f60dce171bfd\") " Jan 31 07:22:02 crc kubenswrapper[4687]: I0131 07:22:02.661829 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-catalog-content\") pod \"96b29ecd-7dac-4d9f-98f7-f60dce171bfd\" (UID: \"96b29ecd-7dac-4d9f-98f7-f60dce171bfd\") " Jan 31 07:22:02 crc kubenswrapper[4687]: I0131 07:22:02.661857 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-utilities\") pod \"96b29ecd-7dac-4d9f-98f7-f60dce171bfd\" (UID: \"96b29ecd-7dac-4d9f-98f7-f60dce171bfd\") " Jan 31 07:22:02 crc kubenswrapper[4687]: I0131 07:22:02.663878 4687 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-utilities" (OuterVolumeSpecName: "utilities") pod "96b29ecd-7dac-4d9f-98f7-f60dce171bfd" (UID: "96b29ecd-7dac-4d9f-98f7-f60dce171bfd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4687]: I0131 07:22:02.667018 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-kube-api-access-j8nld" (OuterVolumeSpecName: "kube-api-access-j8nld") pod "96b29ecd-7dac-4d9f-98f7-f60dce171bfd" (UID: "96b29ecd-7dac-4d9f-98f7-f60dce171bfd"). InnerVolumeSpecName "kube-api-access-j8nld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4687]: I0131 07:22:02.723439 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96b29ecd-7dac-4d9f-98f7-f60dce171bfd" (UID: "96b29ecd-7dac-4d9f-98f7-f60dce171bfd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:22:02 crc kubenswrapper[4687]: I0131 07:22:02.763924 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4687]: I0131 07:22:02.763957 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:02 crc kubenswrapper[4687]: I0131 07:22:02.763967 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8nld\" (UniqueName: \"kubernetes.io/projected/96b29ecd-7dac-4d9f-98f7-f60dce171bfd-kube-api-access-j8nld\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.284167 4687 generic.go:334] "Generic (PLEG): container finished" podID="96b29ecd-7dac-4d9f-98f7-f60dce171bfd" containerID="8148e5ae459a357bb627e97120fc9efcf7c98aba12db359d19e43dbebcdf4c09" exitCode=0 Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.284239 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dccfx" Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.284233 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dccfx" event={"ID":"96b29ecd-7dac-4d9f-98f7-f60dce171bfd","Type":"ContainerDied","Data":"8148e5ae459a357bb627e97120fc9efcf7c98aba12db359d19e43dbebcdf4c09"} Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.284363 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dccfx" event={"ID":"96b29ecd-7dac-4d9f-98f7-f60dce171bfd","Type":"ContainerDied","Data":"fba46d2847a196b353d8332c91ca18e7af8054478d4e96e4faf537f629a6d340"} Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.284381 4687 scope.go:117] "RemoveContainer" containerID="8148e5ae459a357bb627e97120fc9efcf7c98aba12db359d19e43dbebcdf4c09" Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.287313 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vlb7q/must-gather-fbjnk" event={"ID":"4c452b55-db4f-41bb-b4e1-be07609e3400","Type":"ContainerStarted","Data":"89dcd501311e0f5716b5f1550c7fb808321ac7bf1754ae4b204c05e06a98a3c4"} Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.287344 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vlb7q/must-gather-fbjnk" event={"ID":"4c452b55-db4f-41bb-b4e1-be07609e3400","Type":"ContainerStarted","Data":"4e3d73c4dac9e275542607db01fa823551a79b18dfa89566d30a4420fdadc3d7"} Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.287354 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vlb7q/must-gather-fbjnk" event={"ID":"4c452b55-db4f-41bb-b4e1-be07609e3400","Type":"ContainerStarted","Data":"463aadccfb38620f4498d874aa04ad62bf31e1977afac7f468e09b2c43fb6972"} Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.302296 4687 scope.go:117] "RemoveContainer" 
containerID="daa92fcada9ac535e0db31a067e0637ca4aadd687f752c009fff21632e51d596" Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.308830 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vlb7q/must-gather-fbjnk" podStartSLOduration=2.30881399 podStartE2EDuration="2.30881399s" podCreationTimestamp="2026-01-31 07:22:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:22:03.305038347 +0000 UTC m=+2349.582297922" watchObservedRunningTime="2026-01-31 07:22:03.30881399 +0000 UTC m=+2349.586073565" Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.324446 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dccfx"] Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.332739 4687 scope.go:117] "RemoveContainer" containerID="6033f8f36fa46f6140c47c12e74dd07f5bc309af5c0375be24dacbcdd3e3ecd1" Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.340274 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dccfx"] Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.355058 4687 scope.go:117] "RemoveContainer" containerID="8148e5ae459a357bb627e97120fc9efcf7c98aba12db359d19e43dbebcdf4c09" Jan 31 07:22:03 crc kubenswrapper[4687]: E0131 07:22:03.355574 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8148e5ae459a357bb627e97120fc9efcf7c98aba12db359d19e43dbebcdf4c09\": container with ID starting with 8148e5ae459a357bb627e97120fc9efcf7c98aba12db359d19e43dbebcdf4c09 not found: ID does not exist" containerID="8148e5ae459a357bb627e97120fc9efcf7c98aba12db359d19e43dbebcdf4c09" Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.355619 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8148e5ae459a357bb627e97120fc9efcf7c98aba12db359d19e43dbebcdf4c09"} err="failed to get container status \"8148e5ae459a357bb627e97120fc9efcf7c98aba12db359d19e43dbebcdf4c09\": rpc error: code = NotFound desc = could not find container \"8148e5ae459a357bb627e97120fc9efcf7c98aba12db359d19e43dbebcdf4c09\": container with ID starting with 8148e5ae459a357bb627e97120fc9efcf7c98aba12db359d19e43dbebcdf4c09 not found: ID does not exist" Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.355650 4687 scope.go:117] "RemoveContainer" containerID="daa92fcada9ac535e0db31a067e0637ca4aadd687f752c009fff21632e51d596" Jan 31 07:22:03 crc kubenswrapper[4687]: E0131 07:22:03.356108 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daa92fcada9ac535e0db31a067e0637ca4aadd687f752c009fff21632e51d596\": container with ID starting with daa92fcada9ac535e0db31a067e0637ca4aadd687f752c009fff21632e51d596 not found: ID does not exist" containerID="daa92fcada9ac535e0db31a067e0637ca4aadd687f752c009fff21632e51d596" Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.356137 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daa92fcada9ac535e0db31a067e0637ca4aadd687f752c009fff21632e51d596"} err="failed to get container status \"daa92fcada9ac535e0db31a067e0637ca4aadd687f752c009fff21632e51d596\": rpc error: code = NotFound desc = could not find container \"daa92fcada9ac535e0db31a067e0637ca4aadd687f752c009fff21632e51d596\": container with ID starting with daa92fcada9ac535e0db31a067e0637ca4aadd687f752c009fff21632e51d596 not found: ID does not exist" Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.356151 4687 scope.go:117] "RemoveContainer" containerID="6033f8f36fa46f6140c47c12e74dd07f5bc309af5c0375be24dacbcdd3e3ecd1" Jan 31 07:22:03 crc kubenswrapper[4687]: E0131 07:22:03.356471 4687 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6033f8f36fa46f6140c47c12e74dd07f5bc309af5c0375be24dacbcdd3e3ecd1\": container with ID starting with 6033f8f36fa46f6140c47c12e74dd07f5bc309af5c0375be24dacbcdd3e3ecd1 not found: ID does not exist" containerID="6033f8f36fa46f6140c47c12e74dd07f5bc309af5c0375be24dacbcdd3e3ecd1" Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.356525 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6033f8f36fa46f6140c47c12e74dd07f5bc309af5c0375be24dacbcdd3e3ecd1"} err="failed to get container status \"6033f8f36fa46f6140c47c12e74dd07f5bc309af5c0375be24dacbcdd3e3ecd1\": rpc error: code = NotFound desc = could not find container \"6033f8f36fa46f6140c47c12e74dd07f5bc309af5c0375be24dacbcdd3e3ecd1\": container with ID starting with 6033f8f36fa46f6140c47c12e74dd07f5bc309af5c0375be24dacbcdd3e3ecd1 not found: ID does not exist" Jan 31 07:22:03 crc kubenswrapper[4687]: I0131 07:22:03.611221 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b29ecd-7dac-4d9f-98f7-f60dce171bfd" path="/var/lib/kubelet/pods/96b29ecd-7dac-4d9f-98f7-f60dce171bfd/volumes" Jan 31 07:22:09 crc kubenswrapper[4687]: I0131 07:22:09.188975 4687 scope.go:117] "RemoveContainer" containerID="90fd08432e434333002676c2a6c96027767e78106a0217fd0ac1f8dba86d32ed" Jan 31 07:22:11 crc kubenswrapper[4687]: I0131 07:22:11.603747 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:22:11 crc kubenswrapper[4687]: E0131 07:22:11.604266 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" 
podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.301654 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d9dvc"] Jan 31 07:22:18 crc kubenswrapper[4687]: E0131 07:22:18.303109 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b29ecd-7dac-4d9f-98f7-f60dce171bfd" containerName="registry-server" Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.303616 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b29ecd-7dac-4d9f-98f7-f60dce171bfd" containerName="registry-server" Jan 31 07:22:18 crc kubenswrapper[4687]: E0131 07:22:18.303738 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b29ecd-7dac-4d9f-98f7-f60dce171bfd" containerName="extract-content" Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.303858 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b29ecd-7dac-4d9f-98f7-f60dce171bfd" containerName="extract-content" Jan 31 07:22:18 crc kubenswrapper[4687]: E0131 07:22:18.303960 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b29ecd-7dac-4d9f-98f7-f60dce171bfd" containerName="extract-utilities" Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.304039 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b29ecd-7dac-4d9f-98f7-f60dce171bfd" containerName="extract-utilities" Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.304249 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b29ecd-7dac-4d9f-98f7-f60dce171bfd" containerName="registry-server" Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.305351 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.314877 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d9dvc"] Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.351645 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a83c92b4-eb84-46fa-8c42-a7093496431f-catalog-content\") pod \"redhat-marketplace-d9dvc\" (UID: \"a83c92b4-eb84-46fa-8c42-a7093496431f\") " pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.351753 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2jsn\" (UniqueName: \"kubernetes.io/projected/a83c92b4-eb84-46fa-8c42-a7093496431f-kube-api-access-p2jsn\") pod \"redhat-marketplace-d9dvc\" (UID: \"a83c92b4-eb84-46fa-8c42-a7093496431f\") " pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.351822 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a83c92b4-eb84-46fa-8c42-a7093496431f-utilities\") pod \"redhat-marketplace-d9dvc\" (UID: \"a83c92b4-eb84-46fa-8c42-a7093496431f\") " pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.452829 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a83c92b4-eb84-46fa-8c42-a7093496431f-utilities\") pod \"redhat-marketplace-d9dvc\" (UID: \"a83c92b4-eb84-46fa-8c42-a7093496431f\") " pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.452901 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a83c92b4-eb84-46fa-8c42-a7093496431f-catalog-content\") pod \"redhat-marketplace-d9dvc\" (UID: \"a83c92b4-eb84-46fa-8c42-a7093496431f\") " pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.452997 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2jsn\" (UniqueName: \"kubernetes.io/projected/a83c92b4-eb84-46fa-8c42-a7093496431f-kube-api-access-p2jsn\") pod \"redhat-marketplace-d9dvc\" (UID: \"a83c92b4-eb84-46fa-8c42-a7093496431f\") " pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.454333 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a83c92b4-eb84-46fa-8c42-a7093496431f-utilities\") pod \"redhat-marketplace-d9dvc\" (UID: \"a83c92b4-eb84-46fa-8c42-a7093496431f\") " pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.454651 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a83c92b4-eb84-46fa-8c42-a7093496431f-catalog-content\") pod \"redhat-marketplace-d9dvc\" (UID: \"a83c92b4-eb84-46fa-8c42-a7093496431f\") " pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.475289 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2jsn\" (UniqueName: \"kubernetes.io/projected/a83c92b4-eb84-46fa-8c42-a7093496431f-kube-api-access-p2jsn\") pod \"redhat-marketplace-d9dvc\" (UID: \"a83c92b4-eb84-46fa-8c42-a7093496431f\") " pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:18 crc kubenswrapper[4687]: I0131 07:22:18.631188 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:19 crc kubenswrapper[4687]: I0131 07:22:19.080459 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d9dvc"] Jan 31 07:22:19 crc kubenswrapper[4687]: I0131 07:22:19.377221 4687 generic.go:334] "Generic (PLEG): container finished" podID="a83c92b4-eb84-46fa-8c42-a7093496431f" containerID="c3e83ea4c4afb1d0d17cffa8c945db4c97ffff782294f336a06c100d4d4f3fbf" exitCode=0 Jan 31 07:22:19 crc kubenswrapper[4687]: I0131 07:22:19.377295 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d9dvc" event={"ID":"a83c92b4-eb84-46fa-8c42-a7093496431f","Type":"ContainerDied","Data":"c3e83ea4c4afb1d0d17cffa8c945db4c97ffff782294f336a06c100d4d4f3fbf"} Jan 31 07:22:19 crc kubenswrapper[4687]: I0131 07:22:19.377334 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d9dvc" event={"ID":"a83c92b4-eb84-46fa-8c42-a7093496431f","Type":"ContainerStarted","Data":"083b95e9a4a40b335cf98325a01edbc7809e74e0ee839e8702511ecc4470c4ca"} Jan 31 07:22:20 crc kubenswrapper[4687]: I0131 07:22:20.387468 4687 generic.go:334] "Generic (PLEG): container finished" podID="a83c92b4-eb84-46fa-8c42-a7093496431f" containerID="dbfc0521f197331f3267701b75db019f82a2f7a8412f3d48d5767a61742bed17" exitCode=0 Jan 31 07:22:20 crc kubenswrapper[4687]: I0131 07:22:20.387600 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d9dvc" event={"ID":"a83c92b4-eb84-46fa-8c42-a7093496431f","Type":"ContainerDied","Data":"dbfc0521f197331f3267701b75db019f82a2f7a8412f3d48d5767a61742bed17"} Jan 31 07:22:21 crc kubenswrapper[4687]: I0131 07:22:21.409647 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d9dvc" 
event={"ID":"a83c92b4-eb84-46fa-8c42-a7093496431f","Type":"ContainerStarted","Data":"e8215de13ec130c9224355af0c82efc390aa5f3a7c56aeea510b9b5c2c0d62bd"} Jan 31 07:22:21 crc kubenswrapper[4687]: I0131 07:22:21.437345 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d9dvc" podStartSLOduration=2.049045445 podStartE2EDuration="3.437301267s" podCreationTimestamp="2026-01-31 07:22:18 +0000 UTC" firstStartedPulling="2026-01-31 07:22:19.379087192 +0000 UTC m=+2365.656346757" lastFinishedPulling="2026-01-31 07:22:20.767342964 +0000 UTC m=+2367.044602579" observedRunningTime="2026-01-31 07:22:21.429197376 +0000 UTC m=+2367.706456951" watchObservedRunningTime="2026-01-31 07:22:21.437301267 +0000 UTC m=+2367.714560842" Jan 31 07:22:25 crc kubenswrapper[4687]: I0131 07:22:25.607601 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:22:25 crc kubenswrapper[4687]: E0131 07:22:25.608240 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:22:28 crc kubenswrapper[4687]: I0131 07:22:28.631678 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:28 crc kubenswrapper[4687]: I0131 07:22:28.632059 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:28 crc kubenswrapper[4687]: I0131 07:22:28.678610 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:29 crc kubenswrapper[4687]: I0131 07:22:29.521153 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:29 crc kubenswrapper[4687]: I0131 07:22:29.564157 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d9dvc"] Jan 31 07:22:31 crc kubenswrapper[4687]: I0131 07:22:31.468631 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d9dvc" podUID="a83c92b4-eb84-46fa-8c42-a7093496431f" containerName="registry-server" containerID="cri-o://e8215de13ec130c9224355af0c82efc390aa5f3a7c56aeea510b9b5c2c0d62bd" gracePeriod=2 Jan 31 07:22:31 crc kubenswrapper[4687]: I0131 07:22:31.829032 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:31 crc kubenswrapper[4687]: I0131 07:22:31.942358 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a83c92b4-eb84-46fa-8c42-a7093496431f-utilities\") pod \"a83c92b4-eb84-46fa-8c42-a7093496431f\" (UID: \"a83c92b4-eb84-46fa-8c42-a7093496431f\") " Jan 31 07:22:31 crc kubenswrapper[4687]: I0131 07:22:31.942503 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a83c92b4-eb84-46fa-8c42-a7093496431f-catalog-content\") pod \"a83c92b4-eb84-46fa-8c42-a7093496431f\" (UID: \"a83c92b4-eb84-46fa-8c42-a7093496431f\") " Jan 31 07:22:31 crc kubenswrapper[4687]: I0131 07:22:31.942608 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2jsn\" (UniqueName: \"kubernetes.io/projected/a83c92b4-eb84-46fa-8c42-a7093496431f-kube-api-access-p2jsn\") pod 
\"a83c92b4-eb84-46fa-8c42-a7093496431f\" (UID: \"a83c92b4-eb84-46fa-8c42-a7093496431f\") " Jan 31 07:22:31 crc kubenswrapper[4687]: I0131 07:22:31.943395 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a83c92b4-eb84-46fa-8c42-a7093496431f-utilities" (OuterVolumeSpecName: "utilities") pod "a83c92b4-eb84-46fa-8c42-a7093496431f" (UID: "a83c92b4-eb84-46fa-8c42-a7093496431f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:22:31 crc kubenswrapper[4687]: I0131 07:22:31.948693 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a83c92b4-eb84-46fa-8c42-a7093496431f-kube-api-access-p2jsn" (OuterVolumeSpecName: "kube-api-access-p2jsn") pod "a83c92b4-eb84-46fa-8c42-a7093496431f" (UID: "a83c92b4-eb84-46fa-8c42-a7093496431f"). InnerVolumeSpecName "kube-api-access-p2jsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:22:31 crc kubenswrapper[4687]: I0131 07:22:31.968337 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a83c92b4-eb84-46fa-8c42-a7093496431f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a83c92b4-eb84-46fa-8c42-a7093496431f" (UID: "a83c92b4-eb84-46fa-8c42-a7093496431f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.044198 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2jsn\" (UniqueName: \"kubernetes.io/projected/a83c92b4-eb84-46fa-8c42-a7093496431f-kube-api-access-p2jsn\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.044244 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a83c92b4-eb84-46fa-8c42-a7093496431f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.044254 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a83c92b4-eb84-46fa-8c42-a7093496431f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.475101 4687 generic.go:334] "Generic (PLEG): container finished" podID="a83c92b4-eb84-46fa-8c42-a7093496431f" containerID="e8215de13ec130c9224355af0c82efc390aa5f3a7c56aeea510b9b5c2c0d62bd" exitCode=0 Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.475150 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d9dvc" Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.475170 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d9dvc" event={"ID":"a83c92b4-eb84-46fa-8c42-a7093496431f","Type":"ContainerDied","Data":"e8215de13ec130c9224355af0c82efc390aa5f3a7c56aeea510b9b5c2c0d62bd"} Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.475596 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d9dvc" event={"ID":"a83c92b4-eb84-46fa-8c42-a7093496431f","Type":"ContainerDied","Data":"083b95e9a4a40b335cf98325a01edbc7809e74e0ee839e8702511ecc4470c4ca"} Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.475615 4687 scope.go:117] "RemoveContainer" containerID="e8215de13ec130c9224355af0c82efc390aa5f3a7c56aeea510b9b5c2c0d62bd" Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.492624 4687 scope.go:117] "RemoveContainer" containerID="dbfc0521f197331f3267701b75db019f82a2f7a8412f3d48d5767a61742bed17" Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.503072 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d9dvc"] Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.507562 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d9dvc"] Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.533277 4687 scope.go:117] "RemoveContainer" containerID="c3e83ea4c4afb1d0d17cffa8c945db4c97ffff782294f336a06c100d4d4f3fbf" Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.547811 4687 scope.go:117] "RemoveContainer" containerID="e8215de13ec130c9224355af0c82efc390aa5f3a7c56aeea510b9b5c2c0d62bd" Jan 31 07:22:32 crc kubenswrapper[4687]: E0131 07:22:32.548263 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e8215de13ec130c9224355af0c82efc390aa5f3a7c56aeea510b9b5c2c0d62bd\": container with ID starting with e8215de13ec130c9224355af0c82efc390aa5f3a7c56aeea510b9b5c2c0d62bd not found: ID does not exist" containerID="e8215de13ec130c9224355af0c82efc390aa5f3a7c56aeea510b9b5c2c0d62bd" Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.548323 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8215de13ec130c9224355af0c82efc390aa5f3a7c56aeea510b9b5c2c0d62bd"} err="failed to get container status \"e8215de13ec130c9224355af0c82efc390aa5f3a7c56aeea510b9b5c2c0d62bd\": rpc error: code = NotFound desc = could not find container \"e8215de13ec130c9224355af0c82efc390aa5f3a7c56aeea510b9b5c2c0d62bd\": container with ID starting with e8215de13ec130c9224355af0c82efc390aa5f3a7c56aeea510b9b5c2c0d62bd not found: ID does not exist" Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.548351 4687 scope.go:117] "RemoveContainer" containerID="dbfc0521f197331f3267701b75db019f82a2f7a8412f3d48d5767a61742bed17" Jan 31 07:22:32 crc kubenswrapper[4687]: E0131 07:22:32.550575 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbfc0521f197331f3267701b75db019f82a2f7a8412f3d48d5767a61742bed17\": container with ID starting with dbfc0521f197331f3267701b75db019f82a2f7a8412f3d48d5767a61742bed17 not found: ID does not exist" containerID="dbfc0521f197331f3267701b75db019f82a2f7a8412f3d48d5767a61742bed17" Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.550631 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbfc0521f197331f3267701b75db019f82a2f7a8412f3d48d5767a61742bed17"} err="failed to get container status \"dbfc0521f197331f3267701b75db019f82a2f7a8412f3d48d5767a61742bed17\": rpc error: code = NotFound desc = could not find container \"dbfc0521f197331f3267701b75db019f82a2f7a8412f3d48d5767a61742bed17\": container with ID 
starting with dbfc0521f197331f3267701b75db019f82a2f7a8412f3d48d5767a61742bed17 not found: ID does not exist" Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.550659 4687 scope.go:117] "RemoveContainer" containerID="c3e83ea4c4afb1d0d17cffa8c945db4c97ffff782294f336a06c100d4d4f3fbf" Jan 31 07:22:32 crc kubenswrapper[4687]: E0131 07:22:32.551028 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3e83ea4c4afb1d0d17cffa8c945db4c97ffff782294f336a06c100d4d4f3fbf\": container with ID starting with c3e83ea4c4afb1d0d17cffa8c945db4c97ffff782294f336a06c100d4d4f3fbf not found: ID does not exist" containerID="c3e83ea4c4afb1d0d17cffa8c945db4c97ffff782294f336a06c100d4d4f3fbf" Jan 31 07:22:32 crc kubenswrapper[4687]: I0131 07:22:32.551079 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3e83ea4c4afb1d0d17cffa8c945db4c97ffff782294f336a06c100d4d4f3fbf"} err="failed to get container status \"c3e83ea4c4afb1d0d17cffa8c945db4c97ffff782294f336a06c100d4d4f3fbf\": rpc error: code = NotFound desc = could not find container \"c3e83ea4c4afb1d0d17cffa8c945db4c97ffff782294f336a06c100d4d4f3fbf\": container with ID starting with c3e83ea4c4afb1d0d17cffa8c945db4c97ffff782294f336a06c100d4d4f3fbf not found: ID does not exist" Jan 31 07:22:33 crc kubenswrapper[4687]: I0131 07:22:33.609771 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a83c92b4-eb84-46fa-8c42-a7093496431f" path="/var/lib/kubelet/pods/a83c92b4-eb84-46fa-8c42-a7093496431f/volumes" Jan 31 07:22:34 crc kubenswrapper[4687]: E0131 07:22:34.575663 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 07:22:34 crc kubenswrapper[4687]: E0131 07:22:34.575726 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config 
podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:36.575712546 +0000 UTC m=+2502.852972121 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : configmap "openstack-config" not found Jan 31 07:22:34 crc kubenswrapper[4687]: E0131 07:22:34.575678 4687 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 07:22:34 crc kubenswrapper[4687]: E0131 07:22:34.575807 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:24:36.575794629 +0000 UTC m=+2502.853054204 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : secret "openstack-config-secret" not found Jan 31 07:22:38 crc kubenswrapper[4687]: I0131 07:22:38.344668 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2_c6cf66be-126e-4ac2-ba8b-165628cd03e7/util/0.log" Jan 31 07:22:38 crc kubenswrapper[4687]: I0131 07:22:38.505492 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2_c6cf66be-126e-4ac2-ba8b-165628cd03e7/pull/0.log" Jan 31 07:22:38 crc kubenswrapper[4687]: I0131 07:22:38.510167 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2_c6cf66be-126e-4ac2-ba8b-165628cd03e7/util/0.log" Jan 31 07:22:38 crc kubenswrapper[4687]: I0131 07:22:38.561731 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2_c6cf66be-126e-4ac2-ba8b-165628cd03e7/pull/0.log" Jan 31 07:22:38 crc kubenswrapper[4687]: I0131 07:22:38.603562 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:22:38 crc kubenswrapper[4687]: E0131 07:22:38.603796 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 
07:22:38 crc kubenswrapper[4687]: I0131 07:22:38.749125 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2_c6cf66be-126e-4ac2-ba8b-165628cd03e7/pull/0.log" Jan 31 07:22:38 crc kubenswrapper[4687]: I0131 07:22:38.752122 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2_c6cf66be-126e-4ac2-ba8b-165628cd03e7/util/0.log" Jan 31 07:22:38 crc kubenswrapper[4687]: I0131 07:22:38.759525 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926gfvw2_c6cf66be-126e-4ac2-ba8b-165628cd03e7/extract/0.log" Jan 31 07:22:38 crc kubenswrapper[4687]: I0131 07:22:38.923440 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-847c44d56-p7g54_9baebd08-f9ca-4a8c-a12c-2609be678e5c/manager/0.log" Jan 31 07:22:38 crc kubenswrapper[4687]: I0131 07:22:38.944054 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-index-zb8pz_f412fd69-af65-4534-97fc-1ddbd4ec579d/registry-server/0.log" Jan 31 07:22:50 crc kubenswrapper[4687]: I0131 07:22:50.603041 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:22:50 crc kubenswrapper[4687]: E0131 07:22:50.603770 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:22:52 crc kubenswrapper[4687]: I0131 07:22:52.048098 
4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vsgwh_ed057ac3-3e2d-4b0d-bfbf-292bfeb28cef/control-plane-machine-set-operator/0.log" Jan 31 07:22:52 crc kubenswrapper[4687]: I0131 07:22:52.186511 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kv6zt_8f3171d3-7275-477b-8c99-cae75ecd914c/kube-rbac-proxy/0.log" Jan 31 07:22:52 crc kubenswrapper[4687]: I0131 07:22:52.186633 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kv6zt_8f3171d3-7275-477b-8c99-cae75ecd914c/machine-api-operator/0.log" Jan 31 07:23:05 crc kubenswrapper[4687]: I0131 07:23:05.606352 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:23:05 crc kubenswrapper[4687]: E0131 07:23:05.609233 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:23:16 crc kubenswrapper[4687]: I0131 07:23:16.603789 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:23:16 crc kubenswrapper[4687]: E0131 07:23:16.604688 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:23:20 crc kubenswrapper[4687]: I0131 07:23:20.584892 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-mlbgs_fafa13d1-be81-401e-bb57-ad4e391192c2/kube-rbac-proxy/0.log" Jan 31 07:23:20 crc kubenswrapper[4687]: I0131 07:23:20.649432 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-mlbgs_fafa13d1-be81-401e-bb57-ad4e391192c2/controller/0.log" Jan 31 07:23:20 crc kubenswrapper[4687]: I0131 07:23:20.802015 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-frr-files/0.log" Jan 31 07:23:20 crc kubenswrapper[4687]: I0131 07:23:20.948838 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-metrics/0.log" Jan 31 07:23:20 crc kubenswrapper[4687]: I0131 07:23:20.969925 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-reloader/0.log" Jan 31 07:23:20 crc kubenswrapper[4687]: I0131 07:23:20.978451 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-frr-files/0.log" Jan 31 07:23:21 crc kubenswrapper[4687]: I0131 07:23:21.012832 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-reloader/0.log" Jan 31 07:23:21 crc kubenswrapper[4687]: I0131 07:23:21.165218 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-frr-files/0.log" Jan 31 07:23:21 crc kubenswrapper[4687]: I0131 07:23:21.190552 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-metrics/0.log" Jan 31 07:23:21 crc kubenswrapper[4687]: I0131 07:23:21.196635 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-reloader/0.log" Jan 31 07:23:21 crc kubenswrapper[4687]: I0131 07:23:21.235702 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-metrics/0.log" Jan 31 07:23:21 crc kubenswrapper[4687]: I0131 07:23:21.358143 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-frr-files/0.log" Jan 31 07:23:21 crc kubenswrapper[4687]: I0131 07:23:21.363376 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-reloader/0.log" Jan 31 07:23:21 crc kubenswrapper[4687]: I0131 07:23:21.368655 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/cp-metrics/0.log" Jan 31 07:23:21 crc kubenswrapper[4687]: I0131 07:23:21.428545 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/controller/0.log" Jan 31 07:23:21 crc kubenswrapper[4687]: I0131 07:23:21.606088 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/frr-metrics/0.log" Jan 31 07:23:21 crc kubenswrapper[4687]: I0131 07:23:21.619381 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/kube-rbac-proxy/0.log" Jan 31 07:23:21 crc kubenswrapper[4687]: I0131 07:23:21.620969 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/kube-rbac-proxy-frr/0.log" Jan 31 07:23:21 crc kubenswrapper[4687]: I0131 07:23:21.770329 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-95vth_e6b3e6b5-b5bc-4cc2-9987-c55bb71c29c9/frr-k8s-webhook-server/0.log" Jan 31 07:23:21 crc kubenswrapper[4687]: I0131 07:23:21.835495 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/reloader/0.log" Jan 31 07:23:22 crc kubenswrapper[4687]: I0131 07:23:22.058792 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6bc67c7795-gjjmn_56dc2d0a-cbd6-46f6-8f16-cbc32771dc3e/manager/0.log" Jan 31 07:23:22 crc kubenswrapper[4687]: I0131 07:23:22.136017 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-69bb4c5fc8-6rcfd_ad709481-acec-41f1-af1d-3c84b69f7b2f/webhook-server/0.log" Jan 31 07:23:22 crc kubenswrapper[4687]: I0131 07:23:22.229272 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cqvh6_8cacba96-9df5-43d5-8e68-2a66b3dc0806/kube-rbac-proxy/0.log" Jan 31 07:23:22 crc kubenswrapper[4687]: I0131 07:23:22.253340 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kmhqd_5068efd9-cefe-48eb-96ff-886c9592c7c2/frr/0.log" Jan 31 07:23:22 crc kubenswrapper[4687]: I0131 07:23:22.491347 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cqvh6_8cacba96-9df5-43d5-8e68-2a66b3dc0806/speaker/0.log" Jan 31 07:23:29 crc kubenswrapper[4687]: I0131 07:23:29.603552 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:23:29 crc kubenswrapper[4687]: E0131 07:23:29.604038 4687 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:23:34 crc kubenswrapper[4687]: I0131 07:23:34.180119 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstackclient_17078dd3-3694-49b1-8513-fcc5e9af5902/openstackclient/0.log" Jan 31 07:23:41 crc kubenswrapper[4687]: I0131 07:23:41.603910 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:23:41 crc kubenswrapper[4687]: E0131 07:23:41.604380 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:23:45 crc kubenswrapper[4687]: I0131 07:23:45.480050 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz_838dbbef-88b2-4605-9482-2628852377fa/util/0.log" Jan 31 07:23:45 crc kubenswrapper[4687]: I0131 07:23:45.626030 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz_838dbbef-88b2-4605-9482-2628852377fa/util/0.log" Jan 31 07:23:45 crc kubenswrapper[4687]: I0131 07:23:45.675597 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz_838dbbef-88b2-4605-9482-2628852377fa/pull/0.log" Jan 31 07:23:45 crc kubenswrapper[4687]: I0131 07:23:45.708400 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz_838dbbef-88b2-4605-9482-2628852377fa/pull/0.log" Jan 31 07:23:45 crc kubenswrapper[4687]: I0131 07:23:45.851990 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz_838dbbef-88b2-4605-9482-2628852377fa/util/0.log" Jan 31 07:23:45 crc kubenswrapper[4687]: I0131 07:23:45.876793 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz_838dbbef-88b2-4605-9482-2628852377fa/extract/0.log" Jan 31 07:23:45 crc kubenswrapper[4687]: I0131 07:23:45.877250 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvtgz_838dbbef-88b2-4605-9482-2628852377fa/pull/0.log" Jan 31 07:23:46 crc kubenswrapper[4687]: I0131 07:23:46.033264 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwgjc_944e21b2-ebb1-48c3-aaa8-f0264981f380/extract-utilities/0.log" Jan 31 07:23:46 crc kubenswrapper[4687]: I0131 07:23:46.181665 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwgjc_944e21b2-ebb1-48c3-aaa8-f0264981f380/extract-utilities/0.log" Jan 31 07:23:46 crc kubenswrapper[4687]: I0131 07:23:46.200745 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwgjc_944e21b2-ebb1-48c3-aaa8-f0264981f380/extract-content/0.log" Jan 31 07:23:46 crc kubenswrapper[4687]: I0131 07:23:46.219910 4687 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwgjc_944e21b2-ebb1-48c3-aaa8-f0264981f380/extract-content/0.log" Jan 31 07:23:46 crc kubenswrapper[4687]: I0131 07:23:46.361871 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwgjc_944e21b2-ebb1-48c3-aaa8-f0264981f380/extract-content/0.log" Jan 31 07:23:46 crc kubenswrapper[4687]: I0131 07:23:46.440158 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwgjc_944e21b2-ebb1-48c3-aaa8-f0264981f380/extract-utilities/0.log" Jan 31 07:23:46 crc kubenswrapper[4687]: I0131 07:23:46.576554 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2zq5g_824621bb-1ee0-4034-9dfc-d8bc3440757c/extract-utilities/0.log" Jan 31 07:23:46 crc kubenswrapper[4687]: I0131 07:23:46.760818 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2zq5g_824621bb-1ee0-4034-9dfc-d8bc3440757c/extract-utilities/0.log" Jan 31 07:23:46 crc kubenswrapper[4687]: I0131 07:23:46.773669 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwgjc_944e21b2-ebb1-48c3-aaa8-f0264981f380/registry-server/0.log" Jan 31 07:23:46 crc kubenswrapper[4687]: I0131 07:23:46.789024 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2zq5g_824621bb-1ee0-4034-9dfc-d8bc3440757c/extract-content/0.log" Jan 31 07:23:46 crc kubenswrapper[4687]: I0131 07:23:46.835158 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2zq5g_824621bb-1ee0-4034-9dfc-d8bc3440757c/extract-content/0.log" Jan 31 07:23:46 crc kubenswrapper[4687]: I0131 07:23:46.986784 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-2zq5g_824621bb-1ee0-4034-9dfc-d8bc3440757c/extract-content/0.log" Jan 31 07:23:47 crc kubenswrapper[4687]: I0131 07:23:47.018525 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2zq5g_824621bb-1ee0-4034-9dfc-d8bc3440757c/extract-utilities/0.log" Jan 31 07:23:47 crc kubenswrapper[4687]: I0131 07:23:47.193261 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ff2sf_d11e6dc8-1dc0-442d-951a-b3c6613f938f/marketplace-operator/0.log" Jan 31 07:23:47 crc kubenswrapper[4687]: I0131 07:23:47.290098 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cnml_48ff7f6a-0a52-4206-9fe1-5177e900634b/extract-utilities/0.log" Jan 31 07:23:47 crc kubenswrapper[4687]: I0131 07:23:47.531202 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cnml_48ff7f6a-0a52-4206-9fe1-5177e900634b/extract-utilities/0.log" Jan 31 07:23:47 crc kubenswrapper[4687]: I0131 07:23:47.536861 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2zq5g_824621bb-1ee0-4034-9dfc-d8bc3440757c/registry-server/0.log" Jan 31 07:23:47 crc kubenswrapper[4687]: I0131 07:23:47.602056 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cnml_48ff7f6a-0a52-4206-9fe1-5177e900634b/extract-content/0.log" Jan 31 07:23:47 crc kubenswrapper[4687]: I0131 07:23:47.604578 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cnml_48ff7f6a-0a52-4206-9fe1-5177e900634b/extract-content/0.log" Jan 31 07:23:47 crc kubenswrapper[4687]: I0131 07:23:47.742919 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cnml_48ff7f6a-0a52-4206-9fe1-5177e900634b/extract-utilities/0.log" Jan 31 07:23:47 crc kubenswrapper[4687]: I0131 07:23:47.755957 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cnml_48ff7f6a-0a52-4206-9fe1-5177e900634b/extract-content/0.log" Jan 31 07:23:47 crc kubenswrapper[4687]: I0131 07:23:47.918513 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vl6d_4cce49bf-11b5-4c33-b241-b829e91eb9a2/extract-utilities/0.log" Jan 31 07:23:47 crc kubenswrapper[4687]: I0131 07:23:47.965351 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cnml_48ff7f6a-0a52-4206-9fe1-5177e900634b/registry-server/0.log" Jan 31 07:23:48 crc kubenswrapper[4687]: I0131 07:23:48.084661 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vl6d_4cce49bf-11b5-4c33-b241-b829e91eb9a2/extract-utilities/0.log" Jan 31 07:23:48 crc kubenswrapper[4687]: I0131 07:23:48.137962 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vl6d_4cce49bf-11b5-4c33-b241-b829e91eb9a2/extract-content/0.log" Jan 31 07:23:48 crc kubenswrapper[4687]: I0131 07:23:48.148919 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vl6d_4cce49bf-11b5-4c33-b241-b829e91eb9a2/extract-content/0.log" Jan 31 07:23:48 crc kubenswrapper[4687]: I0131 07:23:48.308776 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vl6d_4cce49bf-11b5-4c33-b241-b829e91eb9a2/extract-content/0.log" Jan 31 07:23:48 crc kubenswrapper[4687]: I0131 07:23:48.317623 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vl6d_4cce49bf-11b5-4c33-b241-b829e91eb9a2/extract-utilities/0.log" Jan 
31 07:23:48 crc kubenswrapper[4687]: I0131 07:23:48.835428 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vl6d_4cce49bf-11b5-4c33-b241-b829e91eb9a2/registry-server/0.log" Jan 31 07:23:53 crc kubenswrapper[4687]: I0131 07:23:53.603431 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:23:53 crc kubenswrapper[4687]: E0131 07:23:53.603943 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:24:07 crc kubenswrapper[4687]: I0131 07:24:07.604308 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:24:07 crc kubenswrapper[4687]: E0131 07:24:07.605291 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:24:18 crc kubenswrapper[4687]: I0131 07:24:18.604640 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:24:18 crc kubenswrapper[4687]: E0131 07:24:18.605501 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:24:29 crc kubenswrapper[4687]: I0131 07:24:29.603282 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:24:29 crc kubenswrapper[4687]: E0131 07:24:29.604101 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:24:36 crc kubenswrapper[4687]: E0131 07:24:36.621103 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 07:24:36 crc kubenswrapper[4687]: E0131 07:24:36.621647 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:26:38.62163439 +0000 UTC m=+2624.898893965 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : configmap "openstack-config" not found Jan 31 07:24:36 crc kubenswrapper[4687]: E0131 07:24:36.621334 4687 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 07:24:36 crc kubenswrapper[4687]: E0131 07:24:36.621993 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:26:38.621982209 +0000 UTC m=+2624.899241784 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : secret "openstack-config-secret" not found Jan 31 07:24:44 crc kubenswrapper[4687]: I0131 07:24:44.603941 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:24:44 crc kubenswrapper[4687]: E0131 07:24:44.604588 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkgkr_openshift-machine-config-operator(c340f403-35a5-4c6d-80b0-2e0fe7399192)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" Jan 31 07:24:59 crc kubenswrapper[4687]: I0131 07:24:59.604374 4687 scope.go:117] "RemoveContainer" containerID="e04db464b8a7e72ed688409686d33641bea20efa6c86d75ea4f7b90776992545" Jan 31 07:25:00 crc 
kubenswrapper[4687]: I0131 07:25:00.330786 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" event={"ID":"c340f403-35a5-4c6d-80b0-2e0fe7399192","Type":"ContainerStarted","Data":"2eb019227aa7e37fc1d5944d9342c37e8fa62bd43a2f0489e8803e9db4765d3d"} Jan 31 07:25:04 crc kubenswrapper[4687]: I0131 07:25:04.356997 4687 generic.go:334] "Generic (PLEG): container finished" podID="4c452b55-db4f-41bb-b4e1-be07609e3400" containerID="4e3d73c4dac9e275542607db01fa823551a79b18dfa89566d30a4420fdadc3d7" exitCode=0 Jan 31 07:25:04 crc kubenswrapper[4687]: I0131 07:25:04.357162 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vlb7q/must-gather-fbjnk" event={"ID":"4c452b55-db4f-41bb-b4e1-be07609e3400","Type":"ContainerDied","Data":"4e3d73c4dac9e275542607db01fa823551a79b18dfa89566d30a4420fdadc3d7"} Jan 31 07:25:04 crc kubenswrapper[4687]: I0131 07:25:04.358206 4687 scope.go:117] "RemoveContainer" containerID="4e3d73c4dac9e275542607db01fa823551a79b18dfa89566d30a4420fdadc3d7" Jan 31 07:25:05 crc kubenswrapper[4687]: I0131 07:25:05.011062 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vlb7q_must-gather-fbjnk_4c452b55-db4f-41bb-b4e1-be07609e3400/gather/0.log" Jan 31 07:25:14 crc kubenswrapper[4687]: I0131 07:25:14.530107 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vlb7q/must-gather-fbjnk"] Jan 31 07:25:14 crc kubenswrapper[4687]: I0131 07:25:14.530846 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vlb7q/must-gather-fbjnk" podUID="4c452b55-db4f-41bb-b4e1-be07609e3400" containerName="copy" containerID="cri-o://89dcd501311e0f5716b5f1550c7fb808321ac7bf1754ae4b204c05e06a98a3c4" gracePeriod=2 Jan 31 07:25:14 crc kubenswrapper[4687]: I0131 07:25:14.535329 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vlb7q/must-gather-fbjnk"] Jan 
31 07:25:14 crc kubenswrapper[4687]: I0131 07:25:14.897145 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vlb7q_must-gather-fbjnk_4c452b55-db4f-41bb-b4e1-be07609e3400/copy/0.log" Jan 31 07:25:14 crc kubenswrapper[4687]: I0131 07:25:14.897866 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vlb7q/must-gather-fbjnk" Jan 31 07:25:15 crc kubenswrapper[4687]: I0131 07:25:15.012139 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4c452b55-db4f-41bb-b4e1-be07609e3400-must-gather-output\") pod \"4c452b55-db4f-41bb-b4e1-be07609e3400\" (UID: \"4c452b55-db4f-41bb-b4e1-be07609e3400\") " Jan 31 07:25:15 crc kubenswrapper[4687]: I0131 07:25:15.012207 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrt9d\" (UniqueName: \"kubernetes.io/projected/4c452b55-db4f-41bb-b4e1-be07609e3400-kube-api-access-vrt9d\") pod \"4c452b55-db4f-41bb-b4e1-be07609e3400\" (UID: \"4c452b55-db4f-41bb-b4e1-be07609e3400\") " Jan 31 07:25:15 crc kubenswrapper[4687]: I0131 07:25:15.019791 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c452b55-db4f-41bb-b4e1-be07609e3400-kube-api-access-vrt9d" (OuterVolumeSpecName: "kube-api-access-vrt9d") pod "4c452b55-db4f-41bb-b4e1-be07609e3400" (UID: "4c452b55-db4f-41bb-b4e1-be07609e3400"). InnerVolumeSpecName "kube-api-access-vrt9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:25:15 crc kubenswrapper[4687]: I0131 07:25:15.085047 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c452b55-db4f-41bb-b4e1-be07609e3400-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4c452b55-db4f-41bb-b4e1-be07609e3400" (UID: "4c452b55-db4f-41bb-b4e1-be07609e3400"). 
InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:25:15 crc kubenswrapper[4687]: I0131 07:25:15.113772 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrt9d\" (UniqueName: \"kubernetes.io/projected/4c452b55-db4f-41bb-b4e1-be07609e3400-kube-api-access-vrt9d\") on node \"crc\" DevicePath \"\"" Jan 31 07:25:15 crc kubenswrapper[4687]: I0131 07:25:15.113819 4687 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4c452b55-db4f-41bb-b4e1-be07609e3400-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 07:25:15 crc kubenswrapper[4687]: I0131 07:25:15.426202 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vlb7q_must-gather-fbjnk_4c452b55-db4f-41bb-b4e1-be07609e3400/copy/0.log" Jan 31 07:25:15 crc kubenswrapper[4687]: I0131 07:25:15.426835 4687 generic.go:334] "Generic (PLEG): container finished" podID="4c452b55-db4f-41bb-b4e1-be07609e3400" containerID="89dcd501311e0f5716b5f1550c7fb808321ac7bf1754ae4b204c05e06a98a3c4" exitCode=143 Jan 31 07:25:15 crc kubenswrapper[4687]: I0131 07:25:15.426886 4687 scope.go:117] "RemoveContainer" containerID="89dcd501311e0f5716b5f1550c7fb808321ac7bf1754ae4b204c05e06a98a3c4" Jan 31 07:25:15 crc kubenswrapper[4687]: I0131 07:25:15.426938 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vlb7q/must-gather-fbjnk" Jan 31 07:25:15 crc kubenswrapper[4687]: I0131 07:25:15.443125 4687 scope.go:117] "RemoveContainer" containerID="4e3d73c4dac9e275542607db01fa823551a79b18dfa89566d30a4420fdadc3d7" Jan 31 07:25:15 crc kubenswrapper[4687]: I0131 07:25:15.496682 4687 scope.go:117] "RemoveContainer" containerID="89dcd501311e0f5716b5f1550c7fb808321ac7bf1754ae4b204c05e06a98a3c4" Jan 31 07:25:15 crc kubenswrapper[4687]: E0131 07:25:15.497101 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89dcd501311e0f5716b5f1550c7fb808321ac7bf1754ae4b204c05e06a98a3c4\": container with ID starting with 89dcd501311e0f5716b5f1550c7fb808321ac7bf1754ae4b204c05e06a98a3c4 not found: ID does not exist" containerID="89dcd501311e0f5716b5f1550c7fb808321ac7bf1754ae4b204c05e06a98a3c4" Jan 31 07:25:15 crc kubenswrapper[4687]: I0131 07:25:15.497133 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dcd501311e0f5716b5f1550c7fb808321ac7bf1754ae4b204c05e06a98a3c4"} err="failed to get container status \"89dcd501311e0f5716b5f1550c7fb808321ac7bf1754ae4b204c05e06a98a3c4\": rpc error: code = NotFound desc = could not find container \"89dcd501311e0f5716b5f1550c7fb808321ac7bf1754ae4b204c05e06a98a3c4\": container with ID starting with 89dcd501311e0f5716b5f1550c7fb808321ac7bf1754ae4b204c05e06a98a3c4 not found: ID does not exist" Jan 31 07:25:15 crc kubenswrapper[4687]: I0131 07:25:15.497154 4687 scope.go:117] "RemoveContainer" containerID="4e3d73c4dac9e275542607db01fa823551a79b18dfa89566d30a4420fdadc3d7" Jan 31 07:25:15 crc kubenswrapper[4687]: E0131 07:25:15.497348 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e3d73c4dac9e275542607db01fa823551a79b18dfa89566d30a4420fdadc3d7\": container with ID starting with 
4e3d73c4dac9e275542607db01fa823551a79b18dfa89566d30a4420fdadc3d7 not found: ID does not exist" containerID="4e3d73c4dac9e275542607db01fa823551a79b18dfa89566d30a4420fdadc3d7" Jan 31 07:25:15 crc kubenswrapper[4687]: I0131 07:25:15.497372 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e3d73c4dac9e275542607db01fa823551a79b18dfa89566d30a4420fdadc3d7"} err="failed to get container status \"4e3d73c4dac9e275542607db01fa823551a79b18dfa89566d30a4420fdadc3d7\": rpc error: code = NotFound desc = could not find container \"4e3d73c4dac9e275542607db01fa823551a79b18dfa89566d30a4420fdadc3d7\": container with ID starting with 4e3d73c4dac9e275542607db01fa823551a79b18dfa89566d30a4420fdadc3d7 not found: ID does not exist" Jan 31 07:25:15 crc kubenswrapper[4687]: I0131 07:25:15.611549 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c452b55-db4f-41bb-b4e1-be07609e3400" path="/var/lib/kubelet/pods/4c452b55-db4f-41bb-b4e1-be07609e3400/volumes" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.549624 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vjfb2"] Jan 31 07:26:04 crc kubenswrapper[4687]: E0131 07:26:04.550540 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83c92b4-eb84-46fa-8c42-a7093496431f" containerName="extract-content" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.550556 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83c92b4-eb84-46fa-8c42-a7093496431f" containerName="extract-content" Jan 31 07:26:04 crc kubenswrapper[4687]: E0131 07:26:04.550571 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c452b55-db4f-41bb-b4e1-be07609e3400" containerName="copy" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.550580 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c452b55-db4f-41bb-b4e1-be07609e3400" containerName="copy" Jan 31 07:26:04 crc kubenswrapper[4687]: 
E0131 07:26:04.550593 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83c92b4-eb84-46fa-8c42-a7093496431f" containerName="registry-server" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.550601 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83c92b4-eb84-46fa-8c42-a7093496431f" containerName="registry-server" Jan 31 07:26:04 crc kubenswrapper[4687]: E0131 07:26:04.550614 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c452b55-db4f-41bb-b4e1-be07609e3400" containerName="gather" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.550622 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c452b55-db4f-41bb-b4e1-be07609e3400" containerName="gather" Jan 31 07:26:04 crc kubenswrapper[4687]: E0131 07:26:04.550637 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83c92b4-eb84-46fa-8c42-a7093496431f" containerName="extract-utilities" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.550645 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83c92b4-eb84-46fa-8c42-a7093496431f" containerName="extract-utilities" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.550764 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c452b55-db4f-41bb-b4e1-be07609e3400" containerName="gather" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.550784 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c452b55-db4f-41bb-b4e1-be07609e3400" containerName="copy" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.550803 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a83c92b4-eb84-46fa-8c42-a7093496431f" containerName="registry-server" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.551776 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.561340 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vjfb2"] Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.587573 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj6w8\" (UniqueName: \"kubernetes.io/projected/97408049-e6f3-4dee-9827-42ed94f7ea0a-kube-api-access-gj6w8\") pod \"certified-operators-vjfb2\" (UID: \"97408049-e6f3-4dee-9827-42ed94f7ea0a\") " pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.587991 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97408049-e6f3-4dee-9827-42ed94f7ea0a-catalog-content\") pod \"certified-operators-vjfb2\" (UID: \"97408049-e6f3-4dee-9827-42ed94f7ea0a\") " pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.588114 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97408049-e6f3-4dee-9827-42ed94f7ea0a-utilities\") pod \"certified-operators-vjfb2\" (UID: \"97408049-e6f3-4dee-9827-42ed94f7ea0a\") " pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.688742 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97408049-e6f3-4dee-9827-42ed94f7ea0a-catalog-content\") pod \"certified-operators-vjfb2\" (UID: \"97408049-e6f3-4dee-9827-42ed94f7ea0a\") " pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.688838 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97408049-e6f3-4dee-9827-42ed94f7ea0a-utilities\") pod \"certified-operators-vjfb2\" (UID: \"97408049-e6f3-4dee-9827-42ed94f7ea0a\") " pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.688896 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj6w8\" (UniqueName: \"kubernetes.io/projected/97408049-e6f3-4dee-9827-42ed94f7ea0a-kube-api-access-gj6w8\") pod \"certified-operators-vjfb2\" (UID: \"97408049-e6f3-4dee-9827-42ed94f7ea0a\") " pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.689399 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97408049-e6f3-4dee-9827-42ed94f7ea0a-catalog-content\") pod \"certified-operators-vjfb2\" (UID: \"97408049-e6f3-4dee-9827-42ed94f7ea0a\") " pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.689505 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97408049-e6f3-4dee-9827-42ed94f7ea0a-utilities\") pod \"certified-operators-vjfb2\" (UID: \"97408049-e6f3-4dee-9827-42ed94f7ea0a\") " pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.709640 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj6w8\" (UniqueName: \"kubernetes.io/projected/97408049-e6f3-4dee-9827-42ed94f7ea0a-kube-api-access-gj6w8\") pod \"certified-operators-vjfb2\" (UID: \"97408049-e6f3-4dee-9827-42ed94f7ea0a\") " pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:04 crc kubenswrapper[4687]: I0131 07:26:04.876200 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:05 crc kubenswrapper[4687]: I0131 07:26:05.173312 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vjfb2"] Jan 31 07:26:05 crc kubenswrapper[4687]: I0131 07:26:05.764512 4687 generic.go:334] "Generic (PLEG): container finished" podID="97408049-e6f3-4dee-9827-42ed94f7ea0a" containerID="52a76286970a3e6f6e149b49937ba9b2596ea3c9e09d6fca1b6782dc313d26f6" exitCode=0 Jan 31 07:26:05 crc kubenswrapper[4687]: I0131 07:26:05.764623 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjfb2" event={"ID":"97408049-e6f3-4dee-9827-42ed94f7ea0a","Type":"ContainerDied","Data":"52a76286970a3e6f6e149b49937ba9b2596ea3c9e09d6fca1b6782dc313d26f6"} Jan 31 07:26:05 crc kubenswrapper[4687]: I0131 07:26:05.765231 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjfb2" event={"ID":"97408049-e6f3-4dee-9827-42ed94f7ea0a","Type":"ContainerStarted","Data":"a2122331ba1a6cecd276db211dcfc43fd7e120d6d392afb686cdb2c91aa56e0d"} Jan 31 07:26:06 crc kubenswrapper[4687]: I0131 07:26:06.772284 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjfb2" event={"ID":"97408049-e6f3-4dee-9827-42ed94f7ea0a","Type":"ContainerStarted","Data":"80c5b3016bbea8d55c89a52fda78f5fa2c3236e19948c074b9dac62adbee2074"} Jan 31 07:26:07 crc kubenswrapper[4687]: I0131 07:26:07.747914 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s9jk2"] Jan 31 07:26:07 crc kubenswrapper[4687]: I0131 07:26:07.749575 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s9jk2" Jan 31 07:26:07 crc kubenswrapper[4687]: I0131 07:26:07.753510 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s9jk2"] Jan 31 07:26:07 crc kubenswrapper[4687]: I0131 07:26:07.778011 4687 generic.go:334] "Generic (PLEG): container finished" podID="97408049-e6f3-4dee-9827-42ed94f7ea0a" containerID="80c5b3016bbea8d55c89a52fda78f5fa2c3236e19948c074b9dac62adbee2074" exitCode=0 Jan 31 07:26:07 crc kubenswrapper[4687]: I0131 07:26:07.778051 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjfb2" event={"ID":"97408049-e6f3-4dee-9827-42ed94f7ea0a","Type":"ContainerDied","Data":"80c5b3016bbea8d55c89a52fda78f5fa2c3236e19948c074b9dac62adbee2074"} Jan 31 07:26:07 crc kubenswrapper[4687]: I0131 07:26:07.935301 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp8t9\" (UniqueName: \"kubernetes.io/projected/3dcfff15-644c-44e7-8ad5-09ae164b224d-kube-api-access-dp8t9\") pod \"redhat-operators-s9jk2\" (UID: \"3dcfff15-644c-44e7-8ad5-09ae164b224d\") " pod="openshift-marketplace/redhat-operators-s9jk2" Jan 31 07:26:07 crc kubenswrapper[4687]: I0131 07:26:07.935439 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dcfff15-644c-44e7-8ad5-09ae164b224d-utilities\") pod \"redhat-operators-s9jk2\" (UID: \"3dcfff15-644c-44e7-8ad5-09ae164b224d\") " pod="openshift-marketplace/redhat-operators-s9jk2" Jan 31 07:26:07 crc kubenswrapper[4687]: I0131 07:26:07.935462 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dcfff15-644c-44e7-8ad5-09ae164b224d-catalog-content\") pod \"redhat-operators-s9jk2\" (UID: \"3dcfff15-644c-44e7-8ad5-09ae164b224d\") 
" pod="openshift-marketplace/redhat-operators-s9jk2" Jan 31 07:26:08 crc kubenswrapper[4687]: I0131 07:26:08.036285 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dcfff15-644c-44e7-8ad5-09ae164b224d-utilities\") pod \"redhat-operators-s9jk2\" (UID: \"3dcfff15-644c-44e7-8ad5-09ae164b224d\") " pod="openshift-marketplace/redhat-operators-s9jk2" Jan 31 07:26:08 crc kubenswrapper[4687]: I0131 07:26:08.036334 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dcfff15-644c-44e7-8ad5-09ae164b224d-catalog-content\") pod \"redhat-operators-s9jk2\" (UID: \"3dcfff15-644c-44e7-8ad5-09ae164b224d\") " pod="openshift-marketplace/redhat-operators-s9jk2" Jan 31 07:26:08 crc kubenswrapper[4687]: I0131 07:26:08.036379 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp8t9\" (UniqueName: \"kubernetes.io/projected/3dcfff15-644c-44e7-8ad5-09ae164b224d-kube-api-access-dp8t9\") pod \"redhat-operators-s9jk2\" (UID: \"3dcfff15-644c-44e7-8ad5-09ae164b224d\") " pod="openshift-marketplace/redhat-operators-s9jk2" Jan 31 07:26:08 crc kubenswrapper[4687]: I0131 07:26:08.036929 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dcfff15-644c-44e7-8ad5-09ae164b224d-catalog-content\") pod \"redhat-operators-s9jk2\" (UID: \"3dcfff15-644c-44e7-8ad5-09ae164b224d\") " pod="openshift-marketplace/redhat-operators-s9jk2" Jan 31 07:26:08 crc kubenswrapper[4687]: I0131 07:26:08.036929 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dcfff15-644c-44e7-8ad5-09ae164b224d-utilities\") pod \"redhat-operators-s9jk2\" (UID: \"3dcfff15-644c-44e7-8ad5-09ae164b224d\") " pod="openshift-marketplace/redhat-operators-s9jk2" Jan 31 07:26:08 crc 
kubenswrapper[4687]: I0131 07:26:08.058711 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp8t9\" (UniqueName: \"kubernetes.io/projected/3dcfff15-644c-44e7-8ad5-09ae164b224d-kube-api-access-dp8t9\") pod \"redhat-operators-s9jk2\" (UID: \"3dcfff15-644c-44e7-8ad5-09ae164b224d\") " pod="openshift-marketplace/redhat-operators-s9jk2" Jan 31 07:26:08 crc kubenswrapper[4687]: I0131 07:26:08.112845 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s9jk2" Jan 31 07:26:08 crc kubenswrapper[4687]: I0131 07:26:08.342077 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s9jk2"] Jan 31 07:26:08 crc kubenswrapper[4687]: W0131 07:26:08.345259 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcfff15_644c_44e7_8ad5_09ae164b224d.slice/crio-70c5ce19abe3a75617c9cccac1454a710775075d1763b217243724b56d782755 WatchSource:0}: Error finding container 70c5ce19abe3a75617c9cccac1454a710775075d1763b217243724b56d782755: Status 404 returned error can't find the container with id 70c5ce19abe3a75617c9cccac1454a710775075d1763b217243724b56d782755 Jan 31 07:26:08 crc kubenswrapper[4687]: I0131 07:26:08.785597 4687 generic.go:334] "Generic (PLEG): container finished" podID="3dcfff15-644c-44e7-8ad5-09ae164b224d" containerID="29c82d6eb4c4c4b010c1bf92bdffad94483ae908809d3a7b9178a98cb40b7b6f" exitCode=0 Jan 31 07:26:08 crc kubenswrapper[4687]: I0131 07:26:08.785655 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9jk2" event={"ID":"3dcfff15-644c-44e7-8ad5-09ae164b224d","Type":"ContainerDied","Data":"29c82d6eb4c4c4b010c1bf92bdffad94483ae908809d3a7b9178a98cb40b7b6f"} Jan 31 07:26:08 crc kubenswrapper[4687]: I0131 07:26:08.785719 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-s9jk2" event={"ID":"3dcfff15-644c-44e7-8ad5-09ae164b224d","Type":"ContainerStarted","Data":"70c5ce19abe3a75617c9cccac1454a710775075d1763b217243724b56d782755"} Jan 31 07:26:08 crc kubenswrapper[4687]: I0131 07:26:08.788806 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjfb2" event={"ID":"97408049-e6f3-4dee-9827-42ed94f7ea0a","Type":"ContainerStarted","Data":"9c56d857fe22f06a163291e847428269683ad17dc429c428b30a660b5bfdbaed"} Jan 31 07:26:08 crc kubenswrapper[4687]: I0131 07:26:08.832681 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vjfb2" podStartSLOduration=2.436917201 podStartE2EDuration="4.832660428s" podCreationTimestamp="2026-01-31 07:26:04 +0000 UTC" firstStartedPulling="2026-01-31 07:26:05.766155921 +0000 UTC m=+2592.043415496" lastFinishedPulling="2026-01-31 07:26:08.161899148 +0000 UTC m=+2594.439158723" observedRunningTime="2026-01-31 07:26:08.830944561 +0000 UTC m=+2595.108204136" watchObservedRunningTime="2026-01-31 07:26:08.832660428 +0000 UTC m=+2595.109920003" Jan 31 07:26:09 crc kubenswrapper[4687]: I0131 07:26:09.796715 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9jk2" event={"ID":"3dcfff15-644c-44e7-8ad5-09ae164b224d","Type":"ContainerStarted","Data":"67bcf8f4f2c2892739f77c9ed796a1c8c8b96c3b934e57fb2bc18120e288e12f"} Jan 31 07:26:10 crc kubenswrapper[4687]: I0131 07:26:10.807326 4687 generic.go:334] "Generic (PLEG): container finished" podID="3dcfff15-644c-44e7-8ad5-09ae164b224d" containerID="67bcf8f4f2c2892739f77c9ed796a1c8c8b96c3b934e57fb2bc18120e288e12f" exitCode=0 Jan 31 07:26:10 crc kubenswrapper[4687]: I0131 07:26:10.807483 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9jk2" 
event={"ID":"3dcfff15-644c-44e7-8ad5-09ae164b224d","Type":"ContainerDied","Data":"67bcf8f4f2c2892739f77c9ed796a1c8c8b96c3b934e57fb2bc18120e288e12f"} Jan 31 07:26:11 crc kubenswrapper[4687]: I0131 07:26:11.820008 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9jk2" event={"ID":"3dcfff15-644c-44e7-8ad5-09ae164b224d","Type":"ContainerStarted","Data":"92dd5b51ac3077e012e855d6812f4074697877d50f2c99b5c45ebcd34f0f151b"} Jan 31 07:26:11 crc kubenswrapper[4687]: I0131 07:26:11.838295 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s9jk2" podStartSLOduration=2.457475848 podStartE2EDuration="4.838280458s" podCreationTimestamp="2026-01-31 07:26:07 +0000 UTC" firstStartedPulling="2026-01-31 07:26:08.787050217 +0000 UTC m=+2595.064309792" lastFinishedPulling="2026-01-31 07:26:11.167854827 +0000 UTC m=+2597.445114402" observedRunningTime="2026-01-31 07:26:11.835678787 +0000 UTC m=+2598.112938362" watchObservedRunningTime="2026-01-31 07:26:11.838280458 +0000 UTC m=+2598.115540033" Jan 31 07:26:14 crc kubenswrapper[4687]: I0131 07:26:14.876380 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:14 crc kubenswrapper[4687]: I0131 07:26:14.876722 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:14 crc kubenswrapper[4687]: I0131 07:26:14.935237 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:15 crc kubenswrapper[4687]: I0131 07:26:15.882797 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:15 crc kubenswrapper[4687]: I0131 07:26:15.920449 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-vjfb2"] Jan 31 07:26:17 crc kubenswrapper[4687]: I0131 07:26:17.855476 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vjfb2" podUID="97408049-e6f3-4dee-9827-42ed94f7ea0a" containerName="registry-server" containerID="cri-o://9c56d857fe22f06a163291e847428269683ad17dc429c428b30a660b5bfdbaed" gracePeriod=2 Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.113224 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s9jk2" Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.113624 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s9jk2" Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.171306 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s9jk2" Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.225441 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.370701 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj6w8\" (UniqueName: \"kubernetes.io/projected/97408049-e6f3-4dee-9827-42ed94f7ea0a-kube-api-access-gj6w8\") pod \"97408049-e6f3-4dee-9827-42ed94f7ea0a\" (UID: \"97408049-e6f3-4dee-9827-42ed94f7ea0a\") " Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.370743 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97408049-e6f3-4dee-9827-42ed94f7ea0a-catalog-content\") pod \"97408049-e6f3-4dee-9827-42ed94f7ea0a\" (UID: \"97408049-e6f3-4dee-9827-42ed94f7ea0a\") " Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.370890 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97408049-e6f3-4dee-9827-42ed94f7ea0a-utilities\") pod \"97408049-e6f3-4dee-9827-42ed94f7ea0a\" (UID: \"97408049-e6f3-4dee-9827-42ed94f7ea0a\") " Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.371790 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97408049-e6f3-4dee-9827-42ed94f7ea0a-utilities" (OuterVolumeSpecName: "utilities") pod "97408049-e6f3-4dee-9827-42ed94f7ea0a" (UID: "97408049-e6f3-4dee-9827-42ed94f7ea0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.377188 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97408049-e6f3-4dee-9827-42ed94f7ea0a-kube-api-access-gj6w8" (OuterVolumeSpecName: "kube-api-access-gj6w8") pod "97408049-e6f3-4dee-9827-42ed94f7ea0a" (UID: "97408049-e6f3-4dee-9827-42ed94f7ea0a"). InnerVolumeSpecName "kube-api-access-gj6w8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.421780 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97408049-e6f3-4dee-9827-42ed94f7ea0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97408049-e6f3-4dee-9827-42ed94f7ea0a" (UID: "97408049-e6f3-4dee-9827-42ed94f7ea0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.472735 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97408049-e6f3-4dee-9827-42ed94f7ea0a-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.472766 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj6w8\" (UniqueName: \"kubernetes.io/projected/97408049-e6f3-4dee-9827-42ed94f7ea0a-kube-api-access-gj6w8\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.472777 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97408049-e6f3-4dee-9827-42ed94f7ea0a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.863687 4687 generic.go:334] "Generic (PLEG): container finished" podID="97408049-e6f3-4dee-9827-42ed94f7ea0a" containerID="9c56d857fe22f06a163291e847428269683ad17dc429c428b30a660b5bfdbaed" exitCode=0 Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.863762 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjfb2" event={"ID":"97408049-e6f3-4dee-9827-42ed94f7ea0a","Type":"ContainerDied","Data":"9c56d857fe22f06a163291e847428269683ad17dc429c428b30a660b5bfdbaed"} Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.863778 4687 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vjfb2" Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.864203 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjfb2" event={"ID":"97408049-e6f3-4dee-9827-42ed94f7ea0a","Type":"ContainerDied","Data":"a2122331ba1a6cecd276db211dcfc43fd7e120d6d392afb686cdb2c91aa56e0d"} Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.864241 4687 scope.go:117] "RemoveContainer" containerID="9c56d857fe22f06a163291e847428269683ad17dc429c428b30a660b5bfdbaed" Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.888458 4687 scope.go:117] "RemoveContainer" containerID="80c5b3016bbea8d55c89a52fda78f5fa2c3236e19948c074b9dac62adbee2074" Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.903612 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vjfb2"] Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.908206 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s9jk2" Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.911570 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vjfb2"] Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.913950 4687 scope.go:117] "RemoveContainer" containerID="52a76286970a3e6f6e149b49937ba9b2596ea3c9e09d6fca1b6782dc313d26f6" Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.947848 4687 scope.go:117] "RemoveContainer" containerID="9c56d857fe22f06a163291e847428269683ad17dc429c428b30a660b5bfdbaed" Jan 31 07:26:18 crc kubenswrapper[4687]: E0131 07:26:18.948382 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c56d857fe22f06a163291e847428269683ad17dc429c428b30a660b5bfdbaed\": container with ID starting with 
9c56d857fe22f06a163291e847428269683ad17dc429c428b30a660b5bfdbaed not found: ID does not exist" containerID="9c56d857fe22f06a163291e847428269683ad17dc429c428b30a660b5bfdbaed"
Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.948478 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c56d857fe22f06a163291e847428269683ad17dc429c428b30a660b5bfdbaed"} err="failed to get container status \"9c56d857fe22f06a163291e847428269683ad17dc429c428b30a660b5bfdbaed\": rpc error: code = NotFound desc = could not find container \"9c56d857fe22f06a163291e847428269683ad17dc429c428b30a660b5bfdbaed\": container with ID starting with 9c56d857fe22f06a163291e847428269683ad17dc429c428b30a660b5bfdbaed not found: ID does not exist"
Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.948524 4687 scope.go:117] "RemoveContainer" containerID="80c5b3016bbea8d55c89a52fda78f5fa2c3236e19948c074b9dac62adbee2074"
Jan 31 07:26:18 crc kubenswrapper[4687]: E0131 07:26:18.949205 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80c5b3016bbea8d55c89a52fda78f5fa2c3236e19948c074b9dac62adbee2074\": container with ID starting with 80c5b3016bbea8d55c89a52fda78f5fa2c3236e19948c074b9dac62adbee2074 not found: ID does not exist" containerID="80c5b3016bbea8d55c89a52fda78f5fa2c3236e19948c074b9dac62adbee2074"
Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.949258 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80c5b3016bbea8d55c89a52fda78f5fa2c3236e19948c074b9dac62adbee2074"} err="failed to get container status \"80c5b3016bbea8d55c89a52fda78f5fa2c3236e19948c074b9dac62adbee2074\": rpc error: code = NotFound desc = could not find container \"80c5b3016bbea8d55c89a52fda78f5fa2c3236e19948c074b9dac62adbee2074\": container with ID starting with 80c5b3016bbea8d55c89a52fda78f5fa2c3236e19948c074b9dac62adbee2074 not found: ID does not exist"
Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.949302 4687 scope.go:117] "RemoveContainer" containerID="52a76286970a3e6f6e149b49937ba9b2596ea3c9e09d6fca1b6782dc313d26f6"
Jan 31 07:26:18 crc kubenswrapper[4687]: E0131 07:26:18.949698 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52a76286970a3e6f6e149b49937ba9b2596ea3c9e09d6fca1b6782dc313d26f6\": container with ID starting with 52a76286970a3e6f6e149b49937ba9b2596ea3c9e09d6fca1b6782dc313d26f6 not found: ID does not exist" containerID="52a76286970a3e6f6e149b49937ba9b2596ea3c9e09d6fca1b6782dc313d26f6"
Jan 31 07:26:18 crc kubenswrapper[4687]: I0131 07:26:18.949779 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52a76286970a3e6f6e149b49937ba9b2596ea3c9e09d6fca1b6782dc313d26f6"} err="failed to get container status \"52a76286970a3e6f6e149b49937ba9b2596ea3c9e09d6fca1b6782dc313d26f6\": rpc error: code = NotFound desc = could not find container \"52a76286970a3e6f6e149b49937ba9b2596ea3c9e09d6fca1b6782dc313d26f6\": container with ID starting with 52a76286970a3e6f6e149b49937ba9b2596ea3c9e09d6fca1b6782dc313d26f6 not found: ID does not exist"
Jan 31 07:26:19 crc kubenswrapper[4687]: I0131 07:26:19.614335 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97408049-e6f3-4dee-9827-42ed94f7ea0a" path="/var/lib/kubelet/pods/97408049-e6f3-4dee-9827-42ed94f7ea0a/volumes"
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.138035 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s9jk2"]
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.138258 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s9jk2" podUID="3dcfff15-644c-44e7-8ad5-09ae164b224d" containerName="registry-server" containerID="cri-o://92dd5b51ac3077e012e855d6812f4074697877d50f2c99b5c45ebcd34f0f151b" gracePeriod=2
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.490209 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s9jk2"
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.613279 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dcfff15-644c-44e7-8ad5-09ae164b224d-utilities\") pod \"3dcfff15-644c-44e7-8ad5-09ae164b224d\" (UID: \"3dcfff15-644c-44e7-8ad5-09ae164b224d\") "
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.613607 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp8t9\" (UniqueName: \"kubernetes.io/projected/3dcfff15-644c-44e7-8ad5-09ae164b224d-kube-api-access-dp8t9\") pod \"3dcfff15-644c-44e7-8ad5-09ae164b224d\" (UID: \"3dcfff15-644c-44e7-8ad5-09ae164b224d\") "
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.613697 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dcfff15-644c-44e7-8ad5-09ae164b224d-catalog-content\") pod \"3dcfff15-644c-44e7-8ad5-09ae164b224d\" (UID: \"3dcfff15-644c-44e7-8ad5-09ae164b224d\") "
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.614285 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dcfff15-644c-44e7-8ad5-09ae164b224d-utilities" (OuterVolumeSpecName: "utilities") pod "3dcfff15-644c-44e7-8ad5-09ae164b224d" (UID: "3dcfff15-644c-44e7-8ad5-09ae164b224d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.618190 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dcfff15-644c-44e7-8ad5-09ae164b224d-kube-api-access-dp8t9" (OuterVolumeSpecName: "kube-api-access-dp8t9") pod "3dcfff15-644c-44e7-8ad5-09ae164b224d" (UID: "3dcfff15-644c-44e7-8ad5-09ae164b224d"). InnerVolumeSpecName "kube-api-access-dp8t9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.715241 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dcfff15-644c-44e7-8ad5-09ae164b224d-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.715276 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp8t9\" (UniqueName: \"kubernetes.io/projected/3dcfff15-644c-44e7-8ad5-09ae164b224d-kube-api-access-dp8t9\") on node \"crc\" DevicePath \"\""
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.728846 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dcfff15-644c-44e7-8ad5-09ae164b224d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3dcfff15-644c-44e7-8ad5-09ae164b224d" (UID: "3dcfff15-644c-44e7-8ad5-09ae164b224d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.816174 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dcfff15-644c-44e7-8ad5-09ae164b224d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.893903 4687 generic.go:334] "Generic (PLEG): container finished" podID="3dcfff15-644c-44e7-8ad5-09ae164b224d" containerID="92dd5b51ac3077e012e855d6812f4074697877d50f2c99b5c45ebcd34f0f151b" exitCode=0
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.893956 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9jk2" event={"ID":"3dcfff15-644c-44e7-8ad5-09ae164b224d","Type":"ContainerDied","Data":"92dd5b51ac3077e012e855d6812f4074697877d50f2c99b5c45ebcd34f0f151b"}
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.893990 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9jk2" event={"ID":"3dcfff15-644c-44e7-8ad5-09ae164b224d","Type":"ContainerDied","Data":"70c5ce19abe3a75617c9cccac1454a710775075d1763b217243724b56d782755"}
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.894013 4687 scope.go:117] "RemoveContainer" containerID="92dd5b51ac3077e012e855d6812f4074697877d50f2c99b5c45ebcd34f0f151b"
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.894021 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s9jk2"
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.910637 4687 scope.go:117] "RemoveContainer" containerID="67bcf8f4f2c2892739f77c9ed796a1c8c8b96c3b934e57fb2bc18120e288e12f"
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.928803 4687 scope.go:117] "RemoveContainer" containerID="29c82d6eb4c4c4b010c1bf92bdffad94483ae908809d3a7b9178a98cb40b7b6f"
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.954745 4687 scope.go:117] "RemoveContainer" containerID="92dd5b51ac3077e012e855d6812f4074697877d50f2c99b5c45ebcd34f0f151b"
Jan 31 07:26:21 crc kubenswrapper[4687]: E0131 07:26:21.955188 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92dd5b51ac3077e012e855d6812f4074697877d50f2c99b5c45ebcd34f0f151b\": container with ID starting with 92dd5b51ac3077e012e855d6812f4074697877d50f2c99b5c45ebcd34f0f151b not found: ID does not exist" containerID="92dd5b51ac3077e012e855d6812f4074697877d50f2c99b5c45ebcd34f0f151b"
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.955364 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dd5b51ac3077e012e855d6812f4074697877d50f2c99b5c45ebcd34f0f151b"} err="failed to get container status \"92dd5b51ac3077e012e855d6812f4074697877d50f2c99b5c45ebcd34f0f151b\": rpc error: code = NotFound desc = could not find container \"92dd5b51ac3077e012e855d6812f4074697877d50f2c99b5c45ebcd34f0f151b\": container with ID starting with 92dd5b51ac3077e012e855d6812f4074697877d50f2c99b5c45ebcd34f0f151b not found: ID does not exist"
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.955391 4687 scope.go:117] "RemoveContainer" containerID="67bcf8f4f2c2892739f77c9ed796a1c8c8b96c3b934e57fb2bc18120e288e12f"
Jan 31 07:26:21 crc kubenswrapper[4687]: E0131 07:26:21.955710 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67bcf8f4f2c2892739f77c9ed796a1c8c8b96c3b934e57fb2bc18120e288e12f\": container with ID starting with 67bcf8f4f2c2892739f77c9ed796a1c8c8b96c3b934e57fb2bc18120e288e12f not found: ID does not exist" containerID="67bcf8f4f2c2892739f77c9ed796a1c8c8b96c3b934e57fb2bc18120e288e12f"
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.955732 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67bcf8f4f2c2892739f77c9ed796a1c8c8b96c3b934e57fb2bc18120e288e12f"} err="failed to get container status \"67bcf8f4f2c2892739f77c9ed796a1c8c8b96c3b934e57fb2bc18120e288e12f\": rpc error: code = NotFound desc = could not find container \"67bcf8f4f2c2892739f77c9ed796a1c8c8b96c3b934e57fb2bc18120e288e12f\": container with ID starting with 67bcf8f4f2c2892739f77c9ed796a1c8c8b96c3b934e57fb2bc18120e288e12f not found: ID does not exist"
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.955767 4687 scope.go:117] "RemoveContainer" containerID="29c82d6eb4c4c4b010c1bf92bdffad94483ae908809d3a7b9178a98cb40b7b6f"
Jan 31 07:26:21 crc kubenswrapper[4687]: E0131 07:26:21.956053 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c82d6eb4c4c4b010c1bf92bdffad94483ae908809d3a7b9178a98cb40b7b6f\": container with ID starting with 29c82d6eb4c4c4b010c1bf92bdffad94483ae908809d3a7b9178a98cb40b7b6f not found: ID does not exist" containerID="29c82d6eb4c4c4b010c1bf92bdffad94483ae908809d3a7b9178a98cb40b7b6f"
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.956095 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c82d6eb4c4c4b010c1bf92bdffad94483ae908809d3a7b9178a98cb40b7b6f"} err="failed to get container status \"29c82d6eb4c4c4b010c1bf92bdffad94483ae908809d3a7b9178a98cb40b7b6f\": rpc error: code = NotFound desc = could not find container \"29c82d6eb4c4c4b010c1bf92bdffad94483ae908809d3a7b9178a98cb40b7b6f\": container with ID starting with 29c82d6eb4c4c4b010c1bf92bdffad94483ae908809d3a7b9178a98cb40b7b6f not found: ID does not exist"
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.961216 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s9jk2"]
Jan 31 07:26:21 crc kubenswrapper[4687]: I0131 07:26:21.966758 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s9jk2"]
Jan 31 07:26:23 crc kubenswrapper[4687]: I0131 07:26:23.622019 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dcfff15-644c-44e7-8ad5-09ae164b224d" path="/var/lib/kubelet/pods/3dcfff15-644c-44e7-8ad5-09ae164b224d/volumes"
Jan 31 07:26:38 crc kubenswrapper[4687]: E0131 07:26:38.684902 4687 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found
Jan 31 07:26:38 crc kubenswrapper[4687]: E0131 07:26:38.686777 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:28:40.686750817 +0000 UTC m=+2746.964010402 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config-secret") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : secret "openstack-config-secret" not found
Jan 31 07:26:38 crc kubenswrapper[4687]: E0131 07:26:38.685259 4687 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found
Jan 31 07:26:38 crc kubenswrapper[4687]: E0131 07:26:38.687015 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config podName:17078dd3-3694-49b1-8513-fcc5e9af5902 nodeName:}" failed. No retries permitted until 2026-01-31 07:28:40.687003403 +0000 UTC m=+2746.964262988 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/17078dd3-3694-49b1-8513-fcc5e9af5902-openstack-config") pod "openstackclient" (UID: "17078dd3-3694-49b1-8513-fcc5e9af5902") : configmap "openstack-config" not found
Jan 31 07:27:28 crc kubenswrapper[4687]: I0131 07:27:28.684321 4687 patch_prober.go:28] interesting pod/machine-config-daemon-hkgkr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 07:27:28 crc kubenswrapper[4687]: I0131 07:27:28.684904 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkgkr" podUID="c340f403-35a5-4c6d-80b0-2e0fe7399192" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"